
I was at the NetApp Insight conference last week and recorded a podcast (see: GreyBeards Podcast) on what NetApp is doing in the AI DL (Deep Learning) space. On the podcast, we talked about a number of verticals that are deploying AI DL right now and using it to improve outcomes.
It was only in 2012 that AI DL broke out, pretty much conquering the speech recognition contests by improving recognition accuracy by leaps and bounds. Prior to that, improvements had been small and incremental at best. Here we are, just 7 years later, and AI DL models are proliferating across industry and every other sector of the world economy.
DL applications in the real world
At the show, we talked about AI DL models being used in healthcare (radiological image analysis, cell counts for infection assessments), automotive (self-driving cars), financial services (fraud detection), and retail (predicting how makeup would look on someone).
And earlier this year, at HPE Discover, they discussed a new technique to share training data while still keeping it private. In this case, they use blockchain technology to publish and share DL neural network model weights and other hyper-parameters trained for some real-world purpose.
Customers download and use the model in their day-to-day activities, but record the data their model analyzes along with its predictions. They use this data to update (re-train) their DL neural net. They then publish their new neural net model weights and other parameters to all the other customers. Each customer of the model does the same, updating (re-training) their own DL neural net.
At some point, an owner or global-model arbitrator takes all these individual model updates, aggregates the neural net weights into a new neural net model, and publishes the new model. And then the process starts over again. In this way, training data is never revealed and stays secure and private, but the DL model updates that result from re-training the model with that secured private data are available to every customer.
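The aggregation step described above boils down to averaging each customer's published weights, layer by layer. Here's a minimal sketch of that idea (the weight arrays are made-up toy values, and HPE's actual blockchain publish/share mechanism is not shown):

```python
import numpy as np

def aggregate_models(customer_weights):
    """Average each layer's weights across all customers' re-trained models.

    customer_weights: list of models, each a list of per-layer numpy arrays.
    Returns one aggregated (global) model as a list of per-layer arrays.
    """
    n_layers = len(customer_weights[0])
    return [np.mean([model[layer] for model in customer_weights], axis=0)
            for layer in range(n_layers)]

# Three customers each publish re-trained weights for a tiny 2-layer model.
customers = [
    [np.array([[0.1, 0.2]]), np.array([0.3])],
    [np.array([[0.3, 0.4]]), np.array([0.5])],
    [np.array([[0.2, 0.3]]), np.array([0.4])],
]
global_model = aggregate_models(customers)  # the new model to re-publish
```

Real systems also weight each customer's contribution (e.g., by how much data they trained on), but plain averaging captures the core of the process.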

Recently, there’s been a slew of articles across many different organizations that show how AI DL is being adopted to work in different areas:
- AI DL is helping to perform better driver license exams. Using cameras in test cars to monitor eye movement, plus car sensor recordings, an AI DL model can tell whether drivers are signaling properly, scanning properly, looking for blind spots, etc. (See Driving license test just got smarter)
- AI DL facial recognition models are helping researchers understand how facial recognition works in real brains (See: Artificial networks shed light on human face recognition).
- AI DL models can be trained to produce creative art using text and image analysis (See Creativity and AI: the next step).
- AI DL can be used to highlight research on emerging topics (see Spotting cutting-edge topics in scientific research using keyword analysis)
- AI DL can be used by anyone to boost social media posts (see Boosting the popularity of social media posts).
And that’s just a sample of AI DL activity from the last few weeks of papers.
Next Steps

All it takes is data that can be quantified and classified. With data and classifications in hand, anyone can train a DL model that performs that classification. It doesn’t require GPU farms; decent CPUs are up to the task for terabytes of data.
But if you want better prediction/classification accuracy, you will need more data, which means longer AI DL training runs. So at some point, maybe at >100TB of data, or if you use AI DL training a lot, you may want that GPU farm.
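To make the "quantified data plus classifications is all you need" point concrete, here's a toy sketch. It uses synthetic data and a hand-rolled logistic regression rather than a real deep net, and everything in it (the data, the labels, the learning rate) is illustrative only, but it shows the shape of any such training run: data in, classifications in, fitted classifier out, all on a plain CPU:

```python
import numpy as np

# Synthetic "quantified" data: 200 samples with 2 numeric features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the "classification" labels

# Fit a logistic-regression classifier with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted class probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on the weights
    b -= 0.5 * np.mean(p - y)                # gradient step on the bias

preds = 1 / (1 + np.exp(-(X @ w + b))) > 0.5
accuracy = np.mean(preds == (y == 1))        # training accuracy
```

Swap the synthetic arrays for your own data and labels (and the 10-line fit loop for a Keras model) and this is, structurally, the same workflow the book's examples walk through.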
The Deep Learning with Python book (my favorite) has a number of examples, such as sentiment analysis of text, median real estate price prediction, and generating text that looks like an author’s work, with maybe a dozen more that one can use to understand AI DL technology. But it’s not rocket science; I believe any qualified programmer could do it, with some serious study.
So the real question is: what are you doing with your data to make use of AI DL models now?
I suppose the other question ought to be, how can you collect more data and classification information, to train more AI DL models?
~~~~
It’s great to be in the storage business.
Photo Credit(s):
- “Neural Network : basic scheme with legends” by fdecomite is licensed under CC BY 2.0
- “aix360-usage-tree“ by IBM Research is licensed under CC BY-ND 2.0
- “Colorpong.com – Neurones vector bundle“ by Karol Gadzala, Colorpong .com is licensed under CC BY-NC-ND 4.0