The future of deep learning ...

You may be interested in this IEEE Spectrum post by Neil C. Thompson (MIT Computer Science and AI Lab):

 https://spectrum.ieee.org/deep-learning-computational-cost

It notes that the search for greater accuracy comes with the need for much greater computational power and a huge carbon footprint.

> So the good news is that deep learning provides enormous flexibility. The bad news is that this flexibility comes at an enormous computational cost. This unfortunate reality has two parts. The first part is true of all statistical models: To improve performance by a factor of k, at least k^2 more data points must be used to train the model. The second part of the computational cost comes explicitly from overparameterization. Once accounted for, this yields a total computational cost for improvement of at least k^4. That little 4 in the exponent is very expensive: A 10-fold improvement, for example, would require at least a 10,000-fold increase in computation.
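
As a back-of-the-envelope illustration of that scaling (my own sketch, not taken from the article), assuming compute grows with at least the fourth power of the desired improvement factor k:

    # Illustration of the claim that compute scales as at least k^4, where k is
    # the desired factor of improvement in model performance.
    def compute_multiplier(k: float) -> float:
        """Lower bound on the extra computation needed for a k-fold improvement."""
        return k ** 4

    for k in (2, 5, 10):
        print(f"{k}-fold improvement -> at least {compute_multiplier(k):,.0f}x more compute")
    # 10-fold improvement -> at least 10,000x more compute, matching the figure quoted above.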


Jacob Moore adds in a separate commentary:

> Exponentially rising costs with diminishing returns to performance. It’s a recipe for another so-called AI Winter.


and

> Neuro-symbolic approaches might be the missing link between human understanding and autonomous learning. It’s far too early to tell. The only thing we can be certain of right now — the deep learning community is in for a reckoning. It might not be today or tomorrow, but winter is coming.

See: https://towardsdatascience.com/the-future-of-deep-learning-7e8574ad6ae3

I am optimistic, as I believe that if we mimic what we already know about the human brain, we can build systems that learn far more scalably and with greater transparency. This will mean replacing deep learning's black box with a more structured approach that supports saliency and integrates with symbolic knowledge.

I am currently trying to do just that, using a hybrid approach to mapping words to meaning that combines statistical and taxonomic knowledge, supports incremental learning, and provides the means to invoke rule-based cognition. The aim is to demonstrate how modest linguistic resources can be used to bootstrap natural language understanding, with a view to then showing how language can be used to teach classroom skills.
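
By way of a toy sketch of the kind of combination I have in mind (the data, names and scoring below are purely illustrative placeholders, not my actual implementation), a word sense can be scored both statistically, from co-occurrence with the context words, and taxonomically, from whether its hypernym matches the kind of thing the context expects:

    # Toy sketch: choose the best sense of a word by combining a statistical
    # score (co-occurrence with context words) with a taxonomic prior (whether
    # the sense's hypernym matches the kind of thing the context expects).
    # All data and weights are illustrative placeholders.

    TAXONOMY = {                       # sense -> hypernym
        "bank/finance": "institution",
        "bank/river": "landform",
    }
    COOCCURRENCE = {                   # (sense, context word) -> strength
        ("bank/finance", "money"): 0.9,
        ("bank/finance", "water"): 0.1,
        ("bank/river", "money"): 0.1,
        ("bank/river", "water"): 0.8,
    }

    def score(sense, context, expected_kind=None):
        statistical = sum(COOCCURRENCE.get((sense, w), 0.0) for w in context)
        taxonomic = 1.0 if expected_kind and TAXONOMY.get(sense) == expected_kind else 0.0
        return statistical + taxonomic   # equal weighting, purely for illustration

    def best_sense(word, context, expected_kind=None):
        senses = [s for s in TAXONOMY if s.startswith(word + "/")]
        return max(senses, key=lambda s: score(s, context, expected_kind))

    print(best_sense("bank", ["money", "loan"]))        # bank/finance
    print(best_sense("bank", ["water"], "landform"))    # bank/river

In a sketch like this, incremental learning would simply mean updating the co-occurrence statistics and the taxonomy as new evidence arrives, with rule-based cognition invoked over whichever sense wins.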

Another promising avenue for research would be scene understanding for computer vision, with a view to improving attention to salient information and robustness under varying conditions. Understanding is much more than classification, as it involves the use of taxonomic and causal knowledge to explain what is being seen. This points to the need for hybrid approaches that mimic human perception and cognition.
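
To make that distinction concrete, here is a minimal, purely hypothetical sketch in which a classifier's labels are turned into an explanation by consulting taxonomic and causal knowledge (all of the knowledge and names below are made up for illustration):

    # Hypothetical sketch: from bare classification labels to an explanation,
    # by consulting taxonomic and causal knowledge. The knowledge is a toy placeholder.

    TAXONOMY = {
        "umbrella": "handheld rain shelter",
        "puddle": "patch of standing water",
    }
    CAUSAL_RULES = {   # (evidence objects) -> plausible cause
        ("puddle", "umbrella"): "it has recently rained",
    }

    def explain(detected):
        parts = [f"{obj} (a {TAXONOMY[obj]})" for obj in detected if obj in TAXONOMY]
        causes = [why for objs, why in CAUSAL_RULES.items()
                  if all(o in detected for o in objs)]
        explanation = "Scene contains " + ", ".join(parts)
        if causes:
            explanation += "; a plausible explanation is that " + " and ".join(causes)
        return explanation

    print(explain(["puddle", "umbrella"]))
    # Scene contains puddle (a patch of standing water), umbrella (a handheld
    # rain shelter); a plausible explanation is that it has recently rained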

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of Things 

Received on Tuesday, 28 September 2021 09:30:43 UTC