Compressing information through the information bottleneck during deep learning

Read an article in Quanta Magazine (New theory cracks open the black box of deep learning) about a talk (see 18: Information Theory of Deep Learning, YouTube video) given a month or so ago by Professor Naftali (Tali) Tishby on his theory that all deep learning convolutional neural networks (CNNs) exhibit an “information bottleneck” during deep learning. This information bottleneck results in compressing the information present in, for example, an image and working only with the relevant information.

The Professor and his researchers used a simple AI problem (like recognizing a dog) and trained a deep learning CNN to perform this task. At the start of the training process the CNN nodes at the top were all connected to the next layer, and those were all connected to the next layer and so on until you got to the output layer.

Essentially, the researchers found that during the deep learning process the CNN went from recognizing all the features of an image to, over time, recognizing (processing?) only the relevant features of an image once it was successfully trained.

Limits of deep learning CNNs

In his talk the Professor identifies two modes of operation of a deep learning CNN: the encoder layers and the decoder layers. The encoder function identifies relevant information in the input and the decoder function takes this relevant information and maps it to an output.

This view results in two statistics that can characterize any deep learning CNN:

  • Sample complexity, which refers to the mutual information inside the last hidden layer of the encoder function, and
  • Accuracy or generalization error, which refers to the mutual information inside the last hidden layer of the decoder function.

Here mutual information is defined as how much of the uncertainty of an input is removed when you have an output that is based on that input. (See the talk for a more formal explanation.)
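
To make that a bit more concrete, here's a minimal sketch (mine, not from the talk) of how mutual information between two discrete variables can be computed from a joint probability table with numpy. The 2x2 table at the bottom is made up for illustration.

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x)p(y)) ), in bits."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal distribution of X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal distribution of Y
    nz = p_xy > 0                           # skip zero entries to avoid log(0)
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

# Example: X and Y strongly correlated -> relatively high mutual information
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(mutual_information(p_xy))   # ~0.278 bits
```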

The Professor states that any complex deep learning CNN can be characterized by these two statistics: sample complexity determines the number of samples required, and accuracy determines the precision with which the deep learning CNN can properly interpret those samples. The deep black line in the chart represents the limit of accuracy achievable for some number of training events, some number of hidden layers and some sample set.

What happens during deep learning

Moreover, the Professor shows that an interesting characteristic of all CNNs is that they converge in accuracy over time, and that the convergence differs based mostly on the number of layers, the sample size and the training count used.

In the chart, the top row shows 3 CNNs with different amounts of training data (5%, 40% and 80% of the total). The chart shows the end result and the trace of learning within each CNN over the same number of epochs (training cycles). More training data generates more accurate results.

The Professor views the epochs after the farthest right point of the traces (where the trace essentially starts moving up and to the left in the chart) as the compression phase of deep learning.

Statistics of deep learning process

The Professor goes on to characterize the deep learning process by calculating the mean and variance of each layer's connection weights.

In the chart he shows a standard “Eiffel Tower” neural network, with 6 hidden layers, each with fewer neurons (nodes) than the previous layer (12 nodes, 10 nodes, 7 nodes, etc.). What he plots is the average and variance of the weights between layers (red lines are the mean and variance of the weights for arcs [connections] between nodes in layer 1 and nodes in layer 2, blue lines the mean and variance of the weights for arcs between layers 2 and 3, purple lines the mean and variance of the weights for arcs between layers 3 and 4, etc.).

He shows that at the start of training the (randomly assigned) weights for each layer have a normalized mean which is higher than their normalized variance. He calls this phase high signal to noise (I would say the opposite, it's low signal to noise, more noise than signal). But as training proceeds (over more epochs), there comes a point where a layer's mean drops below its variance and the signal to noise ratio changes dramatically. After that point the weight means and variances of the different layers start to diverge, or move apart.
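
As a rough illustration of the statistics being plotted (my own reading of the talk, not the Professor's code), the per-layer numbers could be gathered after each epoch something like this; the weight matrices below are randomly generated stand-ins for the first two layer pairs of a 12-10-7 node network.

```python
import numpy as np

def layer_weight_stats(weights_per_layer):
    """Mean and spread of the connection weights for each layer-to-layer arc set."""
    stats = []
    for i, W in enumerate(weights_per_layer):
        stats.append({
            "layer_pair": f"{i + 1}->{i + 2}",
            "mean": float(np.abs(W).mean()),  # one plausible reading of "normalized mean"
            "std": float(W.std()),            # square root of the variance
        })
    return stats

# Random (untrained) stand-in weights for the 12->10 and 10->7 connections
rng = np.random.default_rng(0)
weights = [rng.normal(size=(12, 10)), rng.normal(size=(10, 7))]
for s in layer_weight_stats(weights):
    print(s)
```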

The phase (epochs) after the point where the weight means are lower than their variances he calls the compression phase of deep learning CNN training.

The Professor suggests that every complex deep learning CNN looks the same during training if you perform these calculations. He shows charts like this for other deep learning CNNs used on different problems, and they all exhibit some point where their weight means drop below their variances, after which the means and variances between layers start to differentiate.

Do layer counts and sample size matter?


It turns out that the more hidden layers you have, the sooner (with less training) the compression phase begins. This chart shows the same problem with different hidden layer counts. One can see in the traces that not only is accuracy improved with more layers, but the network also reaches the compression phase more quickly.

Using his sample complexity and accuracy statistics, the Professor has also shown that there are limits to the accuracy of any deep learning CNN as a function of layer count, sample size and training event count.

~~~~

As far as I know, the Professor and his team are the first to try to characterize and understand what happens during deep learning. In doing so, he has shown that the number of layers and the number of samples can be used to predict the speed of learning, and ultimately how accurate any deep learning CNN can be.

Comments?

Research reveals ~liquid nitrogen temperature molecular magnets with 100X denser storage


Must be on a materials science binge these days. I read another article this week on Phys.org, “Major leap towards data storage at the molecular level”, reporting on a Nature article, “Molecular magnetic hysteresis at 60K”, where researchers from the University of Manchester, led by Dr David Mills and Dr Nicholas Chilton from the School of Chemistry, have come up with a new material that provides molecular-level magnetics at almost liquid nitrogen temperatures.

Previously, molecular magnets only operated at 4 to 14K (kelvin), based on research done over the last 25 years or so, but this new research shows similar effects operating at ~60K, close to liquid nitrogen temperatures. Nitrogen freezes at 63K and boils at ~77K and, I would guess, is liquid somewhere between those temperatures.

What new material

The new material, “hexa-tert-butyldysprosocenium complex—[Dy(Cpttt)2][B(C6F5)4], with Cpttt = {C5H2tBu3-1,2,4} and tBu = C(CH3)3”, dysprosocenium for short, was designed (?) by the researchers at Manchester and was shown to exhibit magnetism at the molecular level at 60K.

The storage effect is hysteresis, which is a material's ability to remember the last (magnetic/electrical/?) field it was exposed to; magnetic field strength is measured in oersteds.

The researchers claim the new material provides magnetic hysteresis at a sweep level of 22 oersteds. Not sure what “sweep level of 22 oersteds” means but I assume a molecule of the material is magnetized with a field strength of 22 oersteds and retains this magnetic field over time.

Reports of disk’s death have been greatly exaggerated

While there seems to be no end in sight for the densities of flash storage these days with 3D NAND (see my 3D NAND, how high can it go post or listen to our GBoS FMS2017 wrap-up with Jim Handy podcast), the disk industry lives on.

Disk industry researchers have been investigating HAMR, ([laser] heat assisted magnetic recording, see my Disk density hits new record … post) for some time now to increase disk storage density. But to my knowledge HAMR has not come out in any generally available disk device on the market yet. HAMR was supposed to provide the next big increase in disk storage densities.

Maybe they should be looking at CAMMR, or cold assisted magnetic molecular recording (heard it here, 1st).

According to Dr Chilton, using the new material at 60K in a disk device would increase capacity by 100X. Western Digital just announced a 20TB MyBook Duo disk system for desktop storage and backup. With this new material, at 100X current densities, we could have a 2PB MyBook Duo storage system on your desktop.
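
(To spell out the arithmetic: 100 × 20TB = 2,000TB, or 2PB.)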

That should keep my ever increasing video-photo-music library in fine shape and everything else backed up for a little while longer.

Comments?

Photo Credit(s): Molecular magnetic hysteresis at 60K, Nature article

 

Materials science rescues civilization, again

Read a bunch of articles this past week from MIT Technology Review, How materials science will determine the future of human civilization, from Stanford University, New ultra thin semiconductor materials…, and Wired, This battery breakthrough could change everything.

The message varied a bit between articles but there was an underlying theme to all of them. Materials science was taking off, unlike it ever has before. Let’s take them on, one by one, last in first out.

New battery materials

I have not reported on new battery structures or materials in the past but it seems that every week or so I run across another article or two on the latest battery technology that will change everything. Yet this one just might do that.

I am no materials scientist, but Bill Joy has been investing for a while now in a company, Ionic Materials (both in his job as a VC partner and as an independent investor), that has been working on a solid battery material that could be used to create rechargeable batteries.

The problems with Li(thium)-Ion batteries today are that they are a safety risk (their liquid electrolyte is highly flammable) and they use an awful lot of a relatively scarce mineral (lithium is mined in Chile, Argentina, Australia, China and other countries, with little mined in the USA). Electric cars would not be possible today without Li-Ion batteries.

Ionic Materials claims to have designed a solid polymer electrolyte that combines the properties of the familiar, ultra-safe alkaline batteries we use every day with the rechargeability of the Li-Ion batteries used in phones and cars today. This would make a cheap, safe rechargeable battery that could work anywhere. The polymer also happens to be fire retardant.

The historic problem with alkaline batteries (essentially zinc and manganese dioxide) is that they can't be recharged many times before they short out. But with the new polymer, these batteries could essentially be recharged as many times as Li-Ion batteries today.

Currently, the new material doesn’t have as many recharge cycles as they want but they are working on it. Joy calls the material ional.

New semiconductor materials

Moore’s law will eventually cease. It’s only a question of time and materials.

Silicon is increasingly looking long in the tooth. As researchers shrink silicon devices down to atomic scales, they start to break down and stop functioning.

The advantages of silicon are that it is extremely scalable (shrinkable) and easy to rust. Silicon rust, or silicon dioxide, was very important because it is used as an insulator. As an insulating layer, it could be patterned just like the silicon circuits themselves. That way everything (circuits, gates, switches and insulators) could all use the same elemental material.

A couple of Stanford researchers, Eric Pop and Michal Mleczko, an electrical engineering professor and a postdoc researcher, have discovered two new materials that may just take Moore's law through a couple more chip generations. They wrote about these new materials in their paper in Science Advances.

The new materials, hafnium diselenide and zirconium diselenide, have many properties similar to silicon. One is that they can easily be made to scale. But devices made with the new materials still function at smaller geometries, at just three atoms thick (0.67nm), and also consume less power.

That's good, but they also rust better. When the new materials rust, they form a high-K insulating material. With silicon, high-K insulators require additional materials and processing, more than just simple silicon rust. And the new materials also match silicon's band gap.

Apparently the next step with these new materials is to create electrical contacts. And I am sure that, as with any new material introduced to chip fabrication, it will take quite a while to solve all the technical hurdles. But it's comforting to know that Moore's law will be around another decade or two to keep us humming away.

New multiferroic materials

But just maybe the endgame in chip fabrication materials, and possibly many other domains, is the new materials coming out of ETH Zurich, Switzerland.

There a researcher, Nicola Spaldin, has described a new sort of material that has both ferro-electric and ferro-magnetic properties.

Spaldin starts her paper off by discussing how civilization evolved mainly due to materials science.

Way in the past, fibers and rosin allowed humans to attach stone blades and other materials to poles/arrows/ax handles to hunt and farm better. Later, the discovery of smelting and basic metallurgy led to the casting of bronze in the Bronze Age, and later iron, which could also be hammered, led to the Iron Age. The discovery of the electron led to the vacuum tube. Pure silicon came out during World War II and led to silicon transistors and the chip fabrication technology we have today.

Spaldin talks about the other major problem with silicon, it consumes lots of energy. At current trends, almost half of all worldwide energy production will be used to power silicon electronics in a couple of decades.

Spaldin's solution to the energy consumption problem is multiferroic materials. These materials offer both ferro-electric and ferro-magnetic properties in the same material.

Historically, materials were either ferro-electric or ferro-magnetic but never both. However, Spaldin discovered there was nothing in nature prohibiting the two from co-existing in the same material. Then she and her colleagues designed new multiferroic materials that could do just that.

As I understand it, ferro-electric material allow electrons to form chemical structures which create electrical dipoles or electronic fields. Similarly, ferro-magnetic materials allow chemical structures to create magnetic dipoles or magnetic fields.

That is, multiferroic materials can be used to create both magnetic and electric fields. And the surprising part was that the boundaries between multiferroic magnetic domains form nano-scale conducting channels which can be moved around using electrical fields.

Seems to me that if this were all possible, one could fabricate a substrate using multiferroics and write (program) any electronic circuit you want just by creating a precise magnetic and electrical field on top of it. And with today's disk and tape devices, precise magnetic fields are readily available for circular and linear materials. It would seem just as easy to use multiferroic material for persistent data storage.

Spaldin goes on to say that replacing magnetic fields in today's magnetism-centric information/storage industry with electrical fields should lead to reduced energy consumption.

Welcome to the multiferroic age.

Photo Credit(s): Battery Recycling by Heather Kennedy;

AMD Quad Core backside by Don Scansen;  and

Magnetic Field – 14 by Windell Oskay

Old world AI, Checkers, and The Champion

Read an article in The Atlantic this week (How checkers was solved) on Jonathan Schaeffer, the man who solved checkers, and his quest to beat Marion Tinsley, The Champion.

But first some personal history. While I was at university (back in the early 70's), first learning how to code in real languages (Fortran, 360/Assembler, IBM PL/I, Cobol), one independent project I worked on was a checkers-playing program. It made use of advanced alpha-beta search optimizations, board analysis routines and move trees.
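
For anyone who hasn't seen it, here's a bare-bones sketch of the alpha-beta idea in Python (not my old PL/I code and certainly not Chinook); legal_moves, apply_move and evaluate are hypothetical game-specific routines you'd supply for checkers.

```python
def alpha_beta(board, depth, alpha, beta, maximizing):
    # Leaf of the search tree: fall back to static board analysis
    if depth == 0 or not legal_moves(board):
        return evaluate(board)
    if maximizing:
        best = float("-inf")
        for move in legal_moves(board):
            best = max(best, alpha_beta(apply_move(board, move), depth - 1,
                                        alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:       # prune: the opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in legal_moves(board):
            best = min(best, alpha_beta(apply_move(board, move), depth - 1,
                                        alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:       # prune from the minimizing side
                break
        return best
```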

These were the days of punched card decks and JCL, submitting programs to run as a batch job and getting results hours to days later. For one semester, I won the honor of consuming the most CPU time of any person in the school. I still have the card deck someplace but it may be hard to find a card reader, let alone a PL/I compiler/DOS system to run it.

In any case, better men than I have taken up the checkers challenge over time. And Schaeffer had made it his life’s work to conquer checkers and did it with his program, Chinook.

In my day checkers was a young kid and old person game. It was simple enough to learn but devilishly hard to master. My program got to look about 3.5 moves ahead; Schaeffer's later program, used during an early match, was looking 16 moves ahead and was improved from there.

Besting The Champion

From the 50s through the early 90s there was one man who was the undisputed Champion of Checkers and that was Tinsley. Although he lost a few games during his time to other men, he never lost a match.

The article talks about how Schaeffer improved Chinook over time; at one point it had beaten Tinsley in two games but still lost the match. With a later version, it beat Tinsley a couple of times, and then Tinsley fell ill and had to leave the game; he later died, forfeiting the match.

But even after Tinsley’s death, Schaeffer kept on improving Chinook.

Early on Schaeffer had a checkers endgame database and an opening database that were computed by Chinook as optimal move sequences from valid openings (professional checkers has a set of 3-move openings that players select at random, and the game takes off from there) and endgames (positions with a limited number of pieces, played to the end of the game).

These opening and endgame databases were stored for later retrieval during a game. This way, if a game fell into a set opening or endgame, the program could just follow the optimal play that had already been computed.
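
Conceptually (and this is just my toy illustration, not Chinook's actual format), the lookup amounts to checking the current position against the precomputed tables before falling back to search; position_key and best_move_by_search are hypothetical helpers.

```python
endgame_db = {}   # position_key -> best move, filled in by offline computation
opening_db = {}   # same idea for the sanctioned 3-move openings

def choose_move(board, depth):
    key = position_key(board)
    if key in opening_db:
        return opening_db[key]          # follow precomputed optimal opening play
    if key in endgame_db:
        return endgame_db[key]          # follow precomputed optimal endgame play
    return best_move_by_search(board, depth)   # otherwise, alpha-beta search as above
```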

Solving checkers

As computing power increased, Chinook's endgame database started earlier in the game, with more pieces on the board, and its opening database worked later into the game, following opening moves farther into the midgame.

When Schaeffer’s program solved checkers, essentially his opening database and his endgame database met in the middle of the game. And at that point he had the solution to every checkers position/game that could ever be.

AI vs. humans today

AI has changed to a different way of operating over time. When I was coding my checkers program, it was all search trees/optimizations and board analysis. In fact, in 1997 IBM's Deep Blue used variants of these techniques to beat Garry Kasparov, then World Chess Champion.

Today’s machine learning is less about search algorithms, game analyses, and game (or logic) databases and more about neural nets, machine learning and reinforcement learning.

New AI finally conquered Go, a game very much more complex than checkers or chess, only a couple of years ago. But in 2017 Google (DeepMind) AlphaGo didn't rely on hand-crafted search trees and board analyses; it used neural nets, machine learning and reinforcement learning to beat Ke Jie, the then World #1 ranked Go master.

Welcome to the new world of AI.

Photo Credit(s):

New chip architecture with CPU, storage & sensors in one package

Read an article the other day in MIT News (3D chip combines computing and data storage) about a new 3D chip out of Stanford and MIT research, which includes a CPU, RRAM (resistive RAM) storage class memory and sensors in a single package. Such a chip architecture vastly minimizes the off-chip bottleneck to access storage and sensors.

Chip componentry

The chip's sensors are based on carbon nanotubes. Aside from a layer of silicon at the bottom, all the rest of the transistors used in the chip are also based on carbon nanotube FETs (field effect transistors).

The RRAM storage class memory is based on a dielectric material which uses electrical resistance to store non-volatile data.

The bottom layer is a silicon based CPU. On top of the silicon is a carbon nanotube layer. Next comes the RRAM and the top layer is more carbon nanotubes making up the sensor array.

Architectural benefits

One obvious benefit of having data storage directly accessible to the CPU is that there's no longer a need to go off chip to access data. The 2nd major advantage of the chip architecture is that the sensor array can write directly to RRAM storage, so there's no off-chip delay for sensor readout and storage.

Another advantage of using carbon nanotube FETs is that they can be an order of magnitude more energy efficient than silicon transistors. Moreover, RRAM has the potential to be much denser than DRAM.

Finally, another major advantage is that this can all be built in one 3D chip because carbon nanotube and RRAM fabrication can be done at relatively cool temperatures (~200C) vs. silicon fabrication, which requires relatively high temperatures (1000C). Silicon cannot readily be fabricated in multiple layers because the high temperatures required would harm the lower layers. But you could fabricate the lowest layer in silicon and then the rest as either carbon nanotube FETs or RRAM without harming the silicon layer.

Transistor/RRAM counts

The chip as fabricated has a million RRAM cells (bits?) and 2 million nanotube FETs. In contrast, in 2014, Intel’s 15-core Xeon Ivy Bridge EX had 4.3B transistors and current DRAM chips offer 64Gb. So there’s a ways to go before carbon nanotube and RRAM densities can get to a level available from silicon today.

However, as they have a bottom layer of silicon, they can have all the CPU complexity of an Intel processor and still build RRAM and carbon nanotube FETs on top of that. This makes the chip architecture compatible with current CMOS fabrication techniques and a very interesting addition to current CPU architectures.

~~~~

It's unclear to me why they stopped at 4 layers (1 silicon FET, 1 carbon nanotube FET, 1 RRAM and 1 carbon nanotube FET [sensor array]). If they can do 4, why not 5 or more? That way they could pack in even more RRAM storage and perhaps more sensor layers.

Also, not sure what the bottommost layer of carbon nanotubes is doing. If I had to hazard a guess, it's being used for RRAM control logic. But I could be wrong.

I could see how these chips could be used for very specialized sensor applications, with a limited need for data storage. The researchers claim many types of sensors can be created using carbon nanotubes. If that’s the case, maybe we might see these sorts of chips showing up all over the place.

Comments?

Photo Credit(s): Three dimensional integration of nanotechnologies for computing and data storage on a single chip, Nature magazine. 

Google releases new Cloud TPU & Machine Learning supercomputer in the cloud

Last year about this time Google released their 1st generation TPU chip to the world (see my TPU and HW vs. SW … post for more info).

This year they are releasing a new version of their hardware, called the Cloud TPU chip, and making it available in a cluster on their Google Cloud. Cloud TPU is in alpha testing now. As I understand it, access to the Cloud TPU will eventually be free to researchers who promise to freely publish their research, and at a price for everyone else.

What’s different between TPU v1 and Cloud TPU v2

The differences between version 1 and 2 mostly seem to be tied to training Machine Learning Models.

TPU v1 didn't have any real ability to train machine learning (ML) models. It was a relatively dumb (8-bit ALU) chip, but if you had, say, an ML model already created to do something like understand speech, you could load that model onto the TPU v1 board and have it executed very fast. The TPU v1 chip was also placed on a separate PCIe board (I think), connected to normal x86 CPUs as a sort of CPU accelerator. The advantage of TPU v1 over GPUs or normal x86 CPUs was mostly in power consumption and speed of ML model execution.

Cloud TPU v2 looks to be a standalone multi-processor device that's connected to others via what look like Ethernet connections. One thing that Google seems to be highlighting is the Cloud TPU's floating point performance. A Cloud TPU device (board) is capable of 180 TeraFlops (trillion or 10^12 floating point operations per second). A 64-device Cloud TPU pod can theoretically execute 11.5 PetaFlops (10^15 Flops).
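
That pod figure is just the per-device number scaled up: 64 devices × 180 TeraFlops/device = 11,520 TeraFlops, or roughly 11.5 PetaFlops.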

TPU v1 had no floating point capabilities whatsoever. So Cloud TPU is intended to speed up the training part of ML models which requires extensive floating point calculations. Presumably, they have also improved the ML model execution processing in Cloud TPU vs. TPU V1 as well. More information on their Cloud TPU chips is available here.

So how do you code a TPU?

Both TPU v1 and Cloud TPU are programmed by Google’s open source TensorFlow. TensorFlow is a set of software libraries to facilitate numerical computation via data flow graph programming.

Apparently with data flow programming you have many nodes and many more connections between them. When a connection is fired between nodes it transfers a multi-dimensional matrix (tensor) to the node. I guess the node takes this multidimensional array, does some (floating point) calculations on this data and then determines which of its outgoing connections to fire and how to alter the tensor sent across those connections.
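
Here's a tiny example of what that data flow style looks like in TensorFlow's 1.x-era Python API (my own toy graph, with made-up numbers): you first declare nodes and the tensor-carrying connections between them, and nothing actually computes until a session runs the graph.

```python
import tensorflow as tf

# Declare graph nodes; the edges between them carry tensors (multi-dimensional arrays)
a = tf.placeholder(tf.float32, shape=[2, 3], name="a")
b = tf.placeholder(tf.float32, shape=[3, 1], name="b")
product = tf.matmul(a, b)          # a node that multiplies the two incoming tensors
total = tf.reduce_sum(product)     # a downstream node fed by matmul's output

# Nothing has executed yet; running the session fires the connections
with tf.Session() as sess:
    print(sess.run(total, feed_dict={
        a: [[1., 2., 3.], [4., 5., 6.]],
        b: [[1.], [0.], [-1.]],
    }))                            # prints -4.0
```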

Apparently, TensorFlow works with X86 servers, GPU chips, TPU v1 or Cloud TPU. Google TensorFlow 1.2.0 is now available. Google says that TensorFlow is in use in over 6000 open source projects. TensorFlow uses Python and 1.2.0 runs on Linux, Mac, & Windows. More information on TensorFlow can be found here.

So where can I get some Cloud TPUs

Google is releasing their new Cloud TPU in the TensorFlow Research Cloud (TFRC). The TFRC has 1000 Cloud TPU devices connected together which can be used by any organization to train machine learning algorithms and execute machine learning algorithms.

I signed up (here) to be an alpha tester. During the signup process the site asked me what hardware (GPUs, CPUs) and platforms I was currently using to train my ML models; how long my ML models take to train; how large a training (data) set I use (ranging from 10GB to >1PB); as well as other ML-model-oriented questions. I guess they're trying to understand what the market requirements are outside of Google's own use.

Google’s been using more ML and other AI technologies in many of their products and this will no doubt accelerate with the introduction of the Cloud TPU. Making it available to others is an interesting play but this would be one way to amortize the cost of creating the chip. Another way would be to sell the Cloud TPU directly to businesses, government agencies, non government agencies, etc.

I have no real idea what I am going to do with alpha access to the TFRC but I was thinking maybe I could feed it all my blog posts and train an ML model to start writing blog posts for me. If anyone has any other ideas, please let me know.

Comments?

Photo credit(s): From Google’s website on the new Cloud TPU

 

Disaster recovery from VMware to AWS using Dell EMC Avamar & Data Domain

I was at Dell EMC World 2017 last week and although most of the news was on Dell's new 14th generation servers and Dell-EMC integration progress, Wednesday's keynote was devoted to storage and non-server infrastructure news.

There was plenty of non-server news but one item that caught my attention was new functionality from Dell EMC Data Protection Division that used Avamar and Data Domain to provide disaster recovery for VMware VMs directly to AWS.

Data Domain (AWS) Cloud DR

Dell EMC Data Domain Cloud DR (DDCDR) is a new capability that enables DD to back up to AWS S3 object storage and, when needed, restart the virtual machines within AWS.

DDCDR requires that a customer with Avamar backup and Data Domain (DD) storage install an OVA which deploys an “add-on” to their on-prem Avamar/DD system and install a lightweight VM (Cloud DR server) utility in their AWS domain.

Once the OVA is installed, it will read the changed data; segment, encrypt, and compress the backup data; and then send this plus the backup metadata to AWS S3 objects. Avamar/DD policies can be established to control how many daily backup copies are to be saved to S3 object storage. There's no need for Data Domain or Avamar to run in AWS.
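
Just to make that flow concrete, here's a generic sketch of the compress/encrypt/upload-to-S3 pattern in Python with boto3 and the cryptography library. This is emphatically not Dell EMC's code: the bucket, object key, segmenting and key handling are all made up, and the product may order or implement these steps differently.

```python
import zlib
import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")                  # assumes AWS credentials/region are configured
fernet = Fernet(Fernet.generate_key())   # in practice the key would come from key management

def ship_segment(segment_bytes, bucket="my-dr-bucket", key="backups/segment-0001"):
    compressed = zlib.compress(segment_bytes)     # compress the changed data
    encrypted = fernet.encrypt(compressed)        # encrypt before it leaves the site
    s3.put_object(Bucket=bucket, Key=key, Body=encrypted)   # land it as an S3 object

ship_segment(b"example changed-block data")
```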

When there's a problem at the primary data center, an admin can click on an Avamar GUI button and have the Cloud DR server uncompress, decrypt, rehydrate and restore the backup data into EBS volumes, translate the VMware VM image to an AMI image and then restart the AMI on an AWS virtual server (EC2) with its data on EBS volume storage. The Cloud DR server will use the backup metadata to select an AWS EC2 instance type with the proper CPU and RAM needed to run the application. Once this completes, the VM is running standalone in an AWS EC2 instance. Presumably, you have to have EC2 and EBS storage volume resources available under your AWS domain to be able to install the application and restore its data.
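
And on the restore side, here's a rough sketch (again not Dell EMC's implementation; the IDs, availability zone, device name and instance-type logic are all invented) of the sort of boto3 calls that workflow implies.

```python
import boto3

ec2 = boto3.client("ec2")   # assumes AWS credentials/region are configured

def restore_vm(ami_id, snapshot_id, metadata):
    # Pick an instance type with enough CPU/RAM for the application (hypothetical logic)
    instance_type = "m4.xlarge" if metadata.get("ram_gb", 0) > 8 else "m4.large"

    # Rehydrate the data into an EBS volume from a restored snapshot
    volume = ec2.create_volume(AvailabilityZone="us-east-1a", SnapshotId=snapshot_id)
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    # Launch the translated AMI on an EC2 instance
    reservation = ec2.run_instances(ImageId=ami_id, InstanceType=instance_type,
                                    MinCount=1, MaxCount=1)
    instance_id = reservation["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Attach the restored data volume to the running instance
    ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId=instance_id,
                      Device="/dev/sdf")
    return instance_id
```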

For simplicity purposes, the user can control almost all of the required functionality for DDCDR from the Avamar GUI alone. But in case of a site outage, the user can initiate the application DR from a portal supplied by the Cloud DR server utility.

There you have it: simplified, easy-to-use (AWS) cloud DR for your VM applications, all through Dell EMC Avamar, Data Domain storage and DDCDR. At the moment it only works with the AWS cloud, but it's likely to be available for other public clouds in the near future.

~~~~

There was much more infrastructure news at Dell EMC World 2017. I'll discuss more details on their new storage offerings in my upcoming Storage Intelligence newsletter, due out at the end of this month. If you're interested in receiving your own copy of my newsletter, check out the signup button in the upper right of this page.

Comments?

[Edits were made for readability and technical accuracy after this post was published. Ed]

Crowdsourced vision for visually impaired

Read an article the other day in the Christian Science Monitor (CSM) on the Be My Eyes app. The app is from BeMyEyes.com and is available for iPhone and Android smartphones.

Essentially there are two groups of people that use the app:

  • Visually helpful (sighted) volunteers – these people sign up for the app and, when a visually impaired person needs help, provide visual aid by speaking to the person on the other end.
  • Visually impaired individuals – these people sign up for the app and, when they are having problems understanding what they are (or are not) looking at, they can turn on their camera and take video with their phone. The video is sent to a volunteer, and they can then ask the volunteer for help in deciding what they are looking at.

So, the visually impaired ask questions about the scenes they are shooting with their phone camera and volunteers will provide an answer.

It's easy to register as Sighted and, I assume, as Blind. I downloaded the app, registered and tried a test call in minutes. You have to enable notifications, microphone access and camera access on your phone to use the app. The camera access is required to display the scene/video on your phone.

According to the app there are 492K sighted individuals, 34.1K blind individuals and they have been helped 214K times.

Sounds like an easy way to help the world.

There were no requests to identify a language to use, so it may only work for English speakers. And there was no way to disable it for a period of time when you don't want to be disturbed. But maybe you would just close the app.

But other than that it was simple to use and seemed effective.

Now if there were only an app that would provide the same service for the hearing impaired to supply captions or a “filtered” audio feed to ear buds.

The world needs more apps like this…

Comments?