Crowdsourcing made better

Read an article the other day in MIT News (Better wisdom from crowds) about a new approach to drawing out better information from crowdsourced surveys. It’s based on something the researchers have named the “surprising popularity” algorithm.

Normally, when someone performs a crowdsourced survey, the result is some statistically based (simple or confidence weighted) average of all the responses. But that can go wrong: if the majority is ill-informed, then any average of their responses will most likely be incorrect as well.

Surprisingly popular?

What surprising popularity does is ask respondents two things: what they believe will be the most popular answer to a question, and what they believe the correct answer actually is. It’s these two answers that are then used to choose the most surprisingly popular answer.

For example, let’s say the surveyors want to know whether Philadelphia is the capital of Pennsylvania (PA, a state in the eastern USA). They ask everyone what the most popular answer will be. In this case it’s yes, because Philadelphia is large, well known and historically important. They then ask for a yes or no on whether Philadelphia actually is the capital of PA. The answer they get back from the crowd here is, of course, also yes.

But a sizable contingent would answer that Philadelphia is not the capital of PA (it is actually Harrisburg). And because this (knowledgeable) group all answers the same way (no), “no” becomes the “surprisingly popular” answer, and that is the answer the surprisingly popular algorithm would choose.

What it means

The MIT researchers indicated that their approach reduced errors by 21.3% over a simple majority and 24.2% over a confidence weighted average.

What the researchers have found is that the surprisingly popular algorithm can be used to identify a knowledgeable subset of respondents who know the correct answer. By knowing the most popular answer, the algorithm can discount it and then identify the surprisingly popular answer (the one that gets more support than the crowd predicted it would) and use that as the result of the survey.
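To make this concrete, below is a minimal sketch (in Python) of how the surprisingly popular choice might be computed. This is my reading of the approach, not the researchers’ actual code, and the survey numbers are made up:

from collections import Counter

def surprisingly_popular(own_answers, predicted_shares):
    # own_answers: each respondent's own answer, e.g. "yes"/"no"
    # predicted_shares: each respondent's prediction of answer popularity, e.g. {"yes": 0.8, "no": 0.2}
    n = len(own_answers)
    actual = {a: c / n for a, c in Counter(own_answers).items()}
    predicted = {a: sum(p.get(a, 0.0) for p in predicted_shares) / len(predicted_shares) for a in actual}
    # pick the answer whose actual support most exceeds the crowd's prediction for it
    return max(actual, key=lambda a: actual[a] - predicted[a])

# Philadelphia example: 60% answer "yes" but the crowd predicted "yes" would get 80%,
# so "no" is more popular than expected and wins
answers = ["yes"] * 60 + ["no"] * 40
predictions = [{"yes": 0.8, "no": 0.2}] * 100
print(surprisingly_popular(answers, predictions))   # -> "no"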

Where might this be useful?

In our last (USA) election there were quite a few false news stories distributed via social media (Facebook and Twitter). If there were a mechanism to survey the readers of these stories that asked both whether the story was false/made up and what the most popular answer would be, perhaps a news story’s truthfulness could be established by the crowd itself.

In the past, a number of crowdsourced markets were used to predict stock movements, commodity production and other securities market values. Crowdsourcing using surprisingly popular methods might do a better job of identifying the correct answer from the crowd.

Problems with surprisingly popular methods

The one issue is that this approach could be gamed. If a group wanted a particular answer (let’s say that a news story was true), they could simply indicate that the most popular answer would be false, and the method would fail. But it would fail in any case if that group could command a majority of responses, so it’s no worse than any other crowdsourced approach.

Comments?

Photo Credit(s): Crowd shot by Andrew West; Lost in the crowd by Eric Sonstroem

 

Facebook moving to JBOF (just a bunch of flash)

At Flash Memory Summit (FMS 2016) this past week, Vijay Rao, Director of Technology Strategy at Facebook gave a keynote session on some of the areas that Facebook is focused on for flash storage. One thing that stood out as a significant change of direction was a move to JBOFs in their datacenters.

As you may recall, Facebook was an early adopter of (FusionIO’s) server flash cards to accelerate their applications. But they are moving away from that technology now.

Insane growth at Facebook

Why? Vijay started his talk with some of the growth they have seen over the years in photos, videos, messages, comments, likes, etc. Each was depicted as an animated bubble chart, with a timeline on the horizontal axis and a growth measurement in % on the vertical axis, with the size of the bubble representing the actual quantity of each element.

Although the user activity growth rates all started out small at different times and grew at different rates over their individual timelines, by the end of each animation they were all at almost 90-100% growth in 4Q15 (I assume this is a yearly growth rate, but could be wrong).

Vijay had similar slides showing the growth of their infrastructure, i.e.,  compute, storage and networking. But although infrastructure grew less quickly than user activity (messages/videos/photos/etc.), they all showed similar trends and ended up (as far as I could tell) at ~70% growth.

Facebook down to 1.08 PUE and counting for cold storage

Read a recent article in ArsTechnica about Facebook’s cold storage archive and their sustainable data centers (How Facebook puts petabytes of old cat pix on ice in the name of sustainability). In the article there was a statement that Facebook had achieved a 1.08 PUE (Power Usage Effectiveness) for one of these data centers. This means that for every 100 Watts used to power up the racks, Facebook needed only another 8 Watts for all other overhead.
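For the record, PUE is just total facility power divided by the power delivered to the IT gear. A trivial sketch (the numbers below are illustrative, not Facebook’s measurements):

def pue(total_facility_watts, it_equipment_watts):
    # Power Usage Effectiveness = everything the facility draws / what the racks actually use
    return total_facility_watts / it_equipment_watts

print(pue(108, 100))   # Facebook cold storage: 1.08
print(pue(118, 100))   # a state-of-the-art outsourced data center: ~1.18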

Just last year I wrote a paper for a client where I interviewed the CEO of an outsourced data center provider (DuPont Fabros Technology) whose state-of-the-art new data centers were achieving a PUE of 1.14 to 1.18. For Facebook to run their cold storage data centers at 1.08 PUE is even better.

At the moment, Facebook has two cold storage data centers, one at Prineville, OR and the other at Forest City, NC (Forest City is the one that achieved the 1.08 PUE). The two cold storage sites are in addition to the other Facebook data centers that handle everything else in the Facebook universe.

MAID to the rescue

First off, these are cold storage data centers only, over an EB of data, but still archive storage, racks and racks of it. How they decide something is cold or hot seems to depend on last use. For example, if a picture has been referenced recently then it’s warm; if not, then it’s cold.

Second, they have taken MAID (massive array of idle disks) to a whole new, data center level. Each 1U (Knox storage tray) shelf has 30 4TB drives and a rack has 16 of these storage trays, holding 1.92PB of data. At any one time, only one drive in each storage tray is powered up. The racks have dual servers and only one power shelf (due to the reduced power requirements).
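A quick back-of-the-envelope on that rack, using the numbers above (my arithmetic; actual configurations may differ):

drives_per_tray, tb_per_drive, trays_per_rack = 30, 4, 16
total_drives  = drives_per_tray * trays_per_rack      # 480 drives per rack
rack_capacity = total_drives * tb_per_drive / 1000    # 1.92 PB per rack
spinning      = trays_per_rack                        # one powered drive per tray = 16
print(f"{rack_capacity} PB/rack with only {spinning} of {total_drives} drives spinning")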

They also use pre-fetch hints provided by the Facebook application to cache user data. This means they will fetch some images ahead of time, when a user is paging through photos in their stream, in order to have them in cache when needed. After the user looks at or passes up a photo, it is jettisoned from cache and the next photo is pre-fetched. When the disks are no longer busy, they are powered down.

Fewer power conversions lower PUE

Another thing Facebook is doing is reducing the number of power conversions needed to power the racks. In a typical data center, power comes in at 480 Volts AC, flows through the data center UPS and is then dropped down to 208 Volts AC at the PDU, which feeds the rack power supply, where it is converted to 12 Volts DC. Each conversion of electricity generally wastes power, and in the end only about 85% of the energy coming in reaches the rack’s servers and storage.

In Facebook’s data centers, 480 Volts AC is channeled directly to the racks, which have an in-rack battery backup/UPS, and the rack’s power bus converts the 480 Volt AC to 12 Volt DC or AC directly, as needed. By cutting out the data center level UPS and the PDU energy conversion they save lots of energy overhead, which can instead be used to power the racks.
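The effect of dropping conversion stages is easy to see with some rough arithmetic (the per-stage efficiencies below are my guesses for illustration, not measured values from either design):

def delivered(watts_in, stage_efficiencies):
    for eff in stage_efficiencies:       # each conversion stage loses a few percent
        watts_in *= eff
    return watts_in

# traditional chain: 480VAC -> data center UPS -> 208VAC PDU -> 12VDC rack PSU
print(delivered(100, [0.94, 0.96, 0.94]))   # ~85W of every 100W reaches servers/storage
# Facebook-style chain: 480VAC straight to the rack bus with in-rack battery backup
print(delivered(100, [0.95, 0.98]))         # noticeably more power survives the trip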

Free air cooling helps

Facebook data centers like Prineville also make use of “fresh air cooling” that mixes data center air with outside air, flows it through “wetted media” to cool it, and then sends it down to cool the racks by convection. This process keeps the rack servers and storage within the proper temperature range, though they probably run hotter than most data centers this way. How much fresh air is brought in depends on the outside temperature, but during most months it works very well.

This is in contrast to standard data centers that use chillers, fans and pumps to keep the data center air moving, conditioned and cold enough to chill the equipment. All those fans, pumps and chillers can consume a lot of energy.

Renewable energy, too

Lately, Facebook has made obtaining renewable energy to power their data centers a high priority. One new data center was built close to the Arctic Circle because of the hydro-power available there; another in Iowa and one in Texas were built in locations with wind power.

All of this technology, open sourced

Facebook has open sourced all of its hardware and data center systems. That is, the specifications for all the hardware discussed above and more are available from the Open Compute Project, including the storage specification(s), open rack specification(s) and data center specification(s) for these data centers.

So if you want to build your own cold storage archive that can achieve 1.08 PUE, just pick up their specs and have at it.

Comments?

Picture Credits: DataCenterKnowledge.Com

 

At Scale conference keynote, Facebook video experience re-engineered

The At Scale conference happened this past week in LA. Jay Parikh, Global Head of Engineering and Infrastructure at Facebook, kicked off the conference by talking about how Facebook is attempting to conquer some of its intrinsic problems as it scales up from over 1B users today. I was unable to attend the conference but watched a video of the keynote (on Facebook, of course).

The At Scale community is a group of large, hyper-scale web companies such as Google, Microsoft, Twitter, and of course Facebook, among a gaggle of others, that all have problems trying to scale up their infrastructure to handle more and more user activity. They had 1800 people registered for the At Scale 2015 conference on Monday, double last year’s count. The At Scale community is trying to push the industry’s pace of innovation through a community of companies that need to work at hyper-scale.

Facebook’s video problem

At Facebook, the current hot problem impacting customer satisfaction seems to be video uploads and playback (downloads). The issues with Facebook’s video experience are multifaceted and range from the time it takes to successfully upload a video, to the bandwidth it takes to play back a video, to the system requirements to support live streaming video to 100,000s of users.

Facebook started as a text-only service and migrated to a photo-oriented service, but is now quickly moving to a video-oriented user experience. And it doesn’t stop there: they can see on the horizon that augmented and virtual reality will become a significant driver of activity for Facebook users?!

Daily video views have grown from 1B last year to 4B/day now. They also recently launched a new service, LiveMentions, a live streaming service for celebrities (real time video streams); several celebrities have live streamed to 150K of their subscribers. So video has become, and will continue to be, the main consumer of bandwidth at Facebook.

Struggling to enhance the Facebook user’s video experience over the past year, they have come up with three key engineering principles that have helped them: Planning, Iteration and Performance.

Planning

Facebook is already operating a terabit scale network, so getting something wrong in that network is going to cause major problems around the world. As a result, Facebook engineering focused early on incorporating lots of instrumentation into their network and infrastructure services. This has allowed them to constantly monitor user activity across their infrastructure to identify problems and solutions.

One metric Parikh talked about was “playback success rate”, the percentage of plays where the video starts in under 1 second for a Facebook user. One chart he showed was playback success rate colored over a world map, aggregated (averaged) at the country level. But with their instrumentation, Facebook was able to drill down to regions within a country and even cities within a region. This allows engineering to identify problems at almost any level of granularity they need.
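Here’s a sketch of what that kind of drill-down metric might look like; the event fields are hypothetical and Facebook’s real pipeline is obviously far more involved:

from collections import defaultdict

def playback_success_rate(events, level="country", threshold_ms=1000):
    # fraction of playbacks that started in under `threshold_ms`, grouped by country/region/city
    hits, totals = defaultdict(int), defaultdict(int)
    for e in events:
        totals[e[level]] += 1
        if e["start_ms"] < threshold_ms:
            hits[e[level]] += 1
    return {k: hits[k] / totals[k] for k in totals}

events = [{"country": "BR", "city": "Recife",    "start_ms": 850},
          {"country": "BR", "city": "Recife",    "start_ms": 1900},
          {"country": "US", "city": "Cleveland", "start_ms": 400}]
print(playback_success_rate(events))                  # {'BR': 0.5, 'US': 1.0}
print(playback_success_rate(events, level="city"))    # drill down to cities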

One key takeaway on Planning: if you have the instrumentation in place, have people to monitor and mine the data, and are willing to address the problems that crop up, then you can create a more flexible, efficient and effective environment and build a better product for your users.

Iteration

Iteration is not just about feature deployment, it’s also about the Facebook user experience. Their instrumentation told them they were doing OK on video uploads, but when they looked at the details they saw that some customers were not having a satisfactory video upload experience. For instance, one Facebook engineer had to wait 82 hours to upload a video.

The Facebook world is populated with tens of thousands of unique devices with different memory, compute and storage capabilities. They had to devise approaches that could optimize the encoding for all these different devices, some of which is done on the mobile phones themselves.

They also had to optimize the network stack for different devices and mobile networking technologies. Parikh showed another map depicting network connectivity. Surprise: most of the world is not on LTE, and a vast majority of the world is on 2G and 3G cellular networks. So via iteration, Facebook went about improving video upload by 1% here and 1% there, but with Facebook’s user base these improvements impact millions of users. They used cross functional teams to address the problems they uncovered.

However, video upload problems were not just in the device and connectivity realms. It turns out they had a big cancel-upload button on the screen after the start of a video upload. This was sometimes clicked by mistake, and they found that almost 10% of users hit cancel. So they went through and re-examined the whole user experience to try to eliminate other hindrances to successful video uploads.

Performance

The key takeaway from this segment of the talk was that performance has to be considered from the get-go of a new service or service upgrade. It is impossible to improve performance after the fact, especially for At Scale environments.

In my CS classes, the view was make it work and then make it work fast.  What Facebook has found is that you never have the time after a product has shipped to make it fast. As soon as it works, they had to move on to the next problem.

As a result, if performance is not built in from the start, as a critical requirement/feature of the system architecture and design, it never gets addressed. Also, if all you focus on is making it work, then the design and all the code are built around feature functionality. Changing working functionality later to improve performance is an impossible task and typically represents a re-architecture/re-design/re-implementation of the functionality.

For instance, Facebook used to do video encoding serially on a single server, which often took a long time (10 to 30 minutes). Engineering reimplemented their video encoding to partition the video and distribute the encoding across multiple servers. Doing this sped up encoding time considerably.
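Here’s a rough sketch of the split-and-encode-in-parallel idea. This is not Facebook’s encoder, just the shape of the technique; it assumes ffmpeg with the libx264 codec is installed:

import glob, subprocess
from concurrent.futures import ProcessPoolExecutor

def split(src, seconds=60):
    # cut the uploaded video into fixed-length segments without re-encoding
    subprocess.run(["ffmpeg", "-i", src, "-c", "copy", "-f", "segment",
                    "-segment_time", str(seconds), "seg_%03d.mp4"], check=True)
    return sorted(glob.glob("seg_*.mp4"))

def encode(segment):
    # each segment can be encoded on a separate core (or, at Facebook scale, a separate server)
    out = segment.replace("seg_", "enc_")
    subprocess.run(["ffmpeg", "-i", segment, "-c:v", "libx264", "-crf", "23", out], check=True)
    return out

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        encoded = list(pool.map(encode, split("upload.mp4")))
    print(encoded)   # a real pipeline would now concatenate/package the encoded pieces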

But they didn’t stop there. With such a diverse user networking environment, they felt they could save bandwidth and better optimize user playback if they could reduce the video size delivered for playback. They took the machine learning/AI investments Facebook has made and applied them to the distributed video encoding, analyzing the video scene by scene and opportunistically reducing bandwidth load and storage size while still maintaining video playback quality. By implementing the new video encoding process they have achieved double digit percentage reductions in bandwidth requirements for playback.

Another example of the importance of performance was the LiveMentions feature discussed above. Celebrities often record streams in places with poor networking infrastructure, so in order to ensure a good streaming experience Facebook had to implement variable bit rate video upload to adjust upload bandwidth requirements based on the networking environment. Moreover, once a celebrity starts a live stream, all of their fans in the world get notified and there’s a thundering herd (boot storms, anyone?) wanting to start watching the video stream. In order to support this mass streaming, Facebook implemented stream blocking, which holds off the start of live stream viewing until they have cached enough of the video stream at their edge servers worldwide. This guarantees that all the fans have a good viewing experience, once it starts.
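A toy illustration of the stream blocking idea, nothing like Facebook’s actual implementation, just to show the logic of holding viewers until the edge has buffered enough:

import time

def start_playback(buffered_seconds_at_edge, minimum_seconds=10, poll=0.5):
    # hold the viewer until the edge cache has `minimum_seconds` of the live stream,
    # so the thundering herd is served from the edge rather than hammering the origin
    while buffered_seconds_at_edge() < minimum_seconds:
        time.sleep(poll)              # viewer sees a brief spinner while the edge fills
    return "play from edge cache"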

There were a couple more videos of the show sessions but I didn’t have time to review them.  But Facebook sounds like a fun place to work, especially for infrastructure performance experts.

~~~~

Comments?

Optical discs for Facebook cold storage

I heard last week that Facebook is implementing Blu Ray libraries for cold storage. Each BluRay disk holds ~100GB and they figure they can store 10,000 discs or ~1PB in a rack.

They bundle 12 discs in a cartridge and 36 cartridges in a magazine, placing 24 magazines in a cabinet, with BluRay drives and a robotic arm. The robot arm sits in the middle of the cabinet with the magazines/cartridges located on each side.
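The cabinet arithmetic checks out (using the article’s figures):

discs = 12 * 36 * 24                     # discs/cartridge * cartridges/magazine * magazines/cabinet
print(discs, "discs,", discs * 100 / 1000, "TB per cabinet")   # 10368 discs, ~1036.8 TB (~1PB)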

It’s unclear what Amazon Glacier uses for its storage but a retrieval time of 3-5 hours indicates removable media of some type.  I haven’t seen anything on Windows Azure offering a similar service but Google has released Durable Reduced Availability (DRA) storage which could potentially be hosted on removable media as well.  I was unable to find any access times specifications for Google DRA.

Why the interest in cold storage?

The article mentioned that Facebook is testing the new technology first on its compliance data. After that, Facebook will start using it for cold photo storage. Facebook also said that it will be using different storage technologies for its cold storage repository, mentioning “bad flash” as another alternative.

BluRay supports both re-writeable and WORM (write once, read many times) technology. WORM discs cannot be modified, only destroyed, which would be very useful for anyone’s compliance data. The rewritable BluRay discs might seem more effective for cold photo storage; however, the fact that people on Facebook rarely delete photos says WORM would work well there too.

100GB is a pretty small storage bucket these days but for compliance documents, such as email, invoices, contracts, etc. it’s plenty large.

Can Blu Ray optical provide data center cold storage?

Facebook didn’t discuss the specs on the robot arm they are planning to use, but with ~10K discs per cabinet it has a lot of work to do. Tape library robots move a single cartridge in about 11 seconds or so. If the optical robot could do as well (I have no information to the contrary), one robot arm could support ~4K disc swaps (two moves each) per day. But that would be enterprise class robotics at a 100% duty cycle; more likely 1/2 to 1/4 of this would be considered good for an off-the-shelf system like this. So maybe 1000 to 2000 disc picks per day.

If we use 22 seconds per disc swap (two disc moves), a single robot/rack could support a maximum of 100 to 200TB of data writes per day (assuming robot speed was the only bottleneck).  In the video (see about 30 minutes in) the robot didn’t look all that fast as compared to a tape library robot, but maybe I am biased.

Near as I can tell, a 12x BluRay drive can write at ~35MB/sec (SATA drive, writing a single layer 25GB disc; we assume this can be sustained for a 4-layer or dual-sided 2-layer 100GB disc). So writing a full 100GB disc would take ~48 minutes, and if you add the 22 seconds of disc swap time, one SATA drive running 100% flat out could maybe write 30 discs per day, or ~3TB/day.

In the video, the BluRay drives appear to be located in an area above the disc magazines along each side. There appear to be two drives per column with 6 columns per side, so a maximum of 24 drives. With 24 drives, one rack could write about 72TB/day, or 720 discs per day, which fits within our 22-second-per-swap robot budget. At 72TB/day it’s going to take ~14 days to fill up a cabinet. I could be off on the drive count; they didn’t show the whole cabinet in the video, so it’s possible they have 12 columns per side, 48 drives per cabinet and 144TB/day.
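Here’s the same back-of-the-envelope math in one place (same assumptions as above: ~35MB/sec sustained writes, 22 seconds per swap, 100% duty cycle):

disc_gb, write_mb_s, swap_s, day_s = 100, 35, 22, 86_400
write_s         = disc_gb * 1000 / write_mb_s             # ~2857 sec (~48 min) to write one disc
discs_per_drive = day_s / (write_s + swap_s)              # ~30 discs/day per drive (~3TB/day)
for drives in (24, 48):
    discs_per_day = drives * discs_per_drive
    tb_per_day    = discs_per_day * disc_gb / 1000
    days_to_fill  = 10_368 * disc_gb / 1000 / tb_per_day   # cabinet holds ~1PB (10,368 discs)
    print(f"{drives} drives: ~{tb_per_day:.0f} TB/day, robot busy "
          f"{discs_per_day * swap_s / 3600:.1f} h/day, ~{days_to_fill:.0f} days to fill a cabinet")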

All this assumes 100% duty cycle on the drives which is unreasonable for an enterprise class tape drive let alone a consumer class BluRay drive. This is also write speed, I assume that read speed is the same or better. Also, I didn’t see any servers in the cabinet and I assume that something has to be reading, writing and controlling the optical library. So these other servers need to be somewhere close by, but they could easily be located in a separate rack somewhere near to the library.

So it all makes some amount of sense from a system throughput perspective. Given what we know about the drive speed, cartridge capacity and robot capabilities, it’s certainly possible that the system could sustain the disc swaps and data transfer necessary to provide data center cold storage archive.

And the software

But there’s plenty of software that has to surround an optical library to make it useful. Somehow we would want to identify a file as a candidate for cold storage, then have it moved to some cold storage disc(s), cataloged, and then deleted from the non-cold storage repository. Of course, we probably want 2 or more copies to be written, and maybe these redundant copies should be written to different facilities, or at least different cabinets. The catalog to the cold storage repository is all important and needs to be available 24X7, so it needs to be redundant/protected, updated with extreme care, and from my perspective, kept on some sort of high-speed storage to handle archives of 3EB.

What about OpenStack? Although there have been some rumblings by Oracle and others to provide tape support in OpenStack, nothing seems to be out yet. However, it’s not much of a stretch to see removable media support in OpenStack, if some large company were to put some effort into it.

Other cold storage alternatives

In the video, Facebook says they currently have 30PB of cold storage at one facility and are already in the process of building another. They said that they should have 150PB of cold storage online shortly and that each cold storage facility is capable of holding 3EB or 3,000PB of cold storage.

A couple of years back at Hitachi in Japan, we were shown a Blu Ray optical disc library using 50GB discs. This was just a prototype but they were getting pretty serious about it then. We also saw an update of this at an analyst meeting at HDS, a year or so later. So there’s at least one storage company working on this technology.

Facebook seems to have decided they were better off developing their own approach. It’s probably more dense/space efficient and maybe even more power efficient, but to tell for sure would take some spec comparisons, which aren’t available from Facebook or HDS just yet.

Why not magnetic tape?

I see these large storage repository sizes and wonder if Facebook might not be better off using magnetic tape. It has a much larger cartridge capacity and I believe magnetic tape (LTO or enterprise) would provide better volumetric (bytes/in**3) density than the BluRay cabinet they showed in the video.

Facebook said that BluRay discs have a 50 year lifetime. I believe enterprise and LTO tape vendors say their cartridges have a 30 year lifetime, and that might be one consideration driving Facebook to optical.

The reality is that new LTO technology comes out every 2-3 years or so, and new drives read only 2 generations back and write only the current and prior generation. With that quick a turnover, a data center would probably have to migrate data from old to new tape technology every decade or so, before old tape drives go out of warranty.

I have not seen any BluRay technology roadmaps, so it’s hard to make a comparison, but to date, PC based BluRay drives can typically read and write CDs, DVDs and current BluRay discs (which covers probably 4 to 5 generations back). So optical has a better reputation for backward compatibility over time.

Tape technology roadmaps move so quickly because tape competes with disk, which doubles capacity every 18 months or so. I am sure tape drive and media vendors would be happy not to upgrade their technology so fast, but then disk storage would take over more and more tape storage applications.

If Blu Ray were to become a data center storage standard, as Facebook seems to want, I believe that Blu Ray technology would fall under similar competitive pressures from both disk and tape to upgrade optical technology at a faster rate. When that happens, it would be interesting to see how quickly optical drives stop supporting the backward compatibility that they currently support.

Comments?

Photo Credit: [73/366] Grooves by Dwayne Bent [Ed. note, picture of DVD, not Blu Ray disc]

 

 

The rise of mobile and the death of rest

Read a couple of articles this week about the rise of mobile computing. About a decade ago I was at a conference where one of the keynotes was on the inevitability of ubiquitous computing, or everywhere computing. Now that smart phones have arrived, I believe we have realized that dream.

How big companies die

One article I read was from Forbes: Here’s why Google and Facebook might disappear in the next 5 years. The central tenet of the discussion was that the rise of mobile is a new paradigm shift, just as Web 1.0 and Web 2.0 emerged over time and reinvented most of the industries that went before them.

Most companies around before the internet were unable to see and understand what would constitute a viable business model in the new Web 1.0 environment. Similarly, the major players in Web 1.0 never really saw the transition to the more interactive, information-sharing world that became Web 2.0.

The problem is that all these companies grew up in the reigning paradigm of the day and became successful by seeing that transition as a new way of doing business. They just couldn’t conceive that yet another way of doing business was coming along, one strategically different and thus highly damaging to their now outdated business models.

Full speed ahead

But it gets even worse. Another article I read, from Technology Review, was titled Questions for Mobile Computing.

One interesting tidbit is that the time it takes to reach a given adoption level in the market keeps shrinking. The chart (from Apple) showed that both the iPhone and iPad drastically shrank the time it took to attain high market adoption.

Mobile business models

The main question in the article was how Web 2.0 advertising revenue business models are going to translate to a mobile environment where those companies no longer control the advertising. Many Web 2.0 companies seem to be ignoring mobile at the moment, but it won’t take long for companies focused on this new computing tsunami to roll over them.

Apple and Google have taken two distinctly divergent approaches to this market but at least they are (massively) engaged.  That’s more than can be said for some of the web 2.0 properties out there ignoring mobile to their long term detriment.

The fact is that mobile is a new computing platform. Its possibilities are truly extraordinary, from mHealth (see my post on mHealth taking off in Kenya) and mCurrency today to Google glasses tomorrow.

I strongly believe that those companies that see this shift now and go after it with new business models to profit from mobile computing will succeed faster and more mightily than anything we have seen before. The rest will be left in the dust.

The funny bit is that it’s not the developed world that’s taking the new model in new directions but the developing world. They seem better able to see mobile computing for what it is: a relatively easy way to leapfrog from the 19th century to the 21st in one jump.

So what are profitable business models that leverage mobile computing?

NewSQL and the curse of Old SQL database systems

database 2 by Tim Morgan (cc) (from Flickr)

There was some Twitter traffic yesterday on how Facebook was locked into using MySQL (see article here) and, as such, was having to shard their MySQL database across 1000s of database partitions and memcached servers in order to keep up with the processing load.

The article indicated that this was painful, costly and time consuming, and that Facebook would be better served moving to something else. One answer would be to replace MySQL with recently emerging NewSQL database technology.

One problem with old SQL database systems is that they were never architected to scale beyond a single server. As such, multi-server transactional operation was always a short-term fix bolted onto the underlying system, not a design goal. Sharding emerged as one way to distribute the data across multiple RDBMS servers.

What’s sharding?

Relational database tables are sharded by partitioning them via a key. By hashing this key, one can partition a busy table across a number of servers and use the hash function to look up where to process/access the table data. An alternative to hashing is to use a search lookup function to determine which server has the table data you need and process it there.
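A minimal sketch of the hashing flavor of sharding (the shard names and keys here are made up for illustration):

import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(key):
    # a stable hash of the partitioning key picks which database server owns the row
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for(42))                     # all queries/updates for user 42 go to this shard
print(shard_for("alice@example.com"))    # a different key may land on a different server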

In any case, sharding causes a number of new problems. Namely,

  • Cross-shard joins – anytime you need data from more than one shard server you lose the advantages of distributing data across nodes. Thus, cross-shard joins need to be avoided to retain performance.
  • Load balancing shards – to spread workload you need to split the data by processing activity. But knowing ahead of time what the table processing will look like is hard, and one week’s processing may vary considerably from the next week’s load. As such, it’s hard to load balance shard servers.
  • Non-consistent shards – by spreading transactions across multiple database servers and partitions, transactional consistency can no longer be guaranteed.  While for some applications this may not be a concern, traditional RDBMS activity is consistent.

These are just some of the issues with sharding and I am certain there are more.

What about Hadoop projects and its alternatives?

One possibility is to use Hadoop and its distributed database solutions. However, Hadoop systems were not intended for transaction processing. Nonetheless, Cassandra and HyperTable (see my post on Hadoop – Part 2) can be used for transaction processing, and at least Cassandra can be tailored to any consistency level. But neither Cassandra nor HyperTable is really meant to support high throughput, consistent transaction processing.

Also, the other, non-Hadoop distributed database solutions support data analytics, and most are not positioned as transaction processing systems (see Big Data – Part 3). Teradata might be considered the lone exception here; it can be a very capable transaction oriented database system in addition to its data warehouse operations, but it’s probably not widely distributed or scaleable above a certain threshold.

The problems with most of the Hadoop and non-Hadoop systems above mainly revolve around the lack of support for ACID transactions, i.e., atomic, consistent, isolated, and durable transaction processing. In fact, most of the above solutions relax one or more of these characteristics to provide a scaleable transaction processing model.

NewSQL to the rescue

There are some new emerging database systems that are designed from the ground up to operate in distributed environments called “NewSQL” databases.  Specifically,

  • Clustrix – is a MySQL compatible replacement, delivered as a hardware appliance that can be distributed across a number of nodes that retains fully ACID transaction compliance.
  • GenieDB – is a NoSQL and SQL based layered database that is consistent (atomic), available and partition tolerant (CAP), though not fully ACID compliant; it offers MySQL and popular content management system (CMS) plugins that allow MySQL and/or CMSs to execute on GenieDB clusters with minimal modification.
  • NimbusDB – is a client-cloud based SQL service which distributes copies of data across multiple nodes and offers a majority of SQL99 standard services.
  • VoltDB – is a fully SQL compatible, ACID compliant, distributed, in-memory database system offered as a software only solution executing on 64bit CentOS system but is compatible with any POSIX-compliant, 64bit Linux platform.
  • Xeround –  is a cloud based, MySQL compatible replacement delivered as a (Amazon, Rackspace and others) service offering that provides ACID compliant transaction processing across distributed nodes.

I might be missing some, but these seem to be the main ones today. All of the above take different tacks to offer distributed SQL services. Some relax ACID compliance in order to offer distributed services, but for all of them distributed scale-out performance is key, and they all offer purpose built, distributed, transactional relational database services.

—–

RDBMS technology has evolved over the last half century, with at least ~35 years of running major transactional systems. But today’s hardware architectures together with web scale performance requirements stretch these systems beyond their original design envelope. As such, NewSQL database systems have emerged to replace old SQL technology with a new, intrinsically distributed system architecture providing high performing, scaleable transactional database services for today and the foreseeable future.

Comments?