Research Units: Hilbert J. Kappen, Radboud University, the Netherlands (PI); Riccardo Zecchina, Bocconi Institute of Data Science and Analytics (sub-awardee/partner).
Abstract: In recent years, methods from machine learning have made great advances in solving difficult problems in artificial intelligence. In particular, (deep) feed-forward neural networks have re-established themselves as among the most powerful learning architectures. This has led to spectacular applications in computer vision, speech recognition, and game playing. The current successes are pushing the technology toward ever larger applications, which motivates the development of novel learning methods that address the challenges of learning in networks of increasing size and complexity. A second motivation for this research proposal is the renewed interest in neuromorphic hardware designs, ranging from specialized CMOS-based accelerator platforms to emerging nanoscale technologies using memristors and optical computing. These neurally inspired devices aim to benefit from the efficient design of the brain: in particular, they combat the von Neumann bottleneck by computing locally as much as possible, and they reduce power consumption by using low-precision computing elements. The objective of this research proposal is to address fundamental problems in learning for feedforward and recurrent neural networks. Our approach is to model neural networks as Markov processes and to describe learning as a stochastic optimal control problem.
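To make the final point concrete, the following is a minimal illustrative sketch (not the proposal's actual method; the network size, cost function, and the score-function gradient estimator used here are all assumptions made for illustration). A tiny network of stochastic binary units defines a Markov process from input to output, the weights play the role of the control, and learning minimizes the expected terminal-state cost over that process.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rollout(W1, W2, x):
    """One sample of the controlled Markov chain x -> h -> y.

    The weights W1, W2 parameterize the transition probabilities,
    so they act as the control of the process.
    """
    p_h = sigmoid(W1 @ x)
    h = (rng.random(p_h.shape) < p_h).astype(float)  # stochastic hidden state
    p_y = sigmoid(W2 @ h)
    y = (rng.random(p_y.shape) < p_y).astype(float)  # stochastic output
    return h, y, p_h, p_y

def mean_cost(W1, W2, x, target, n=200):
    """Monte Carlo estimate of the expected terminal-state cost."""
    return np.mean([np.sum((rollout(W1, W2, x)[1] - target) ** 2)
                    for _ in range(n)])

def train(x, target, n_steps=5000, lr=0.1):
    """Choose the control (the weights) to minimize the expected cost."""
    W1 = rng.normal(0.0, 0.1, (4, x.size))
    W2 = rng.normal(0.0, 0.1, (target.size, 4))
    baseline = 0.0
    for _ in range(n_steps):
        h, y, p_h, p_y = rollout(W1, W2, x)
        cost = np.sum((y - target) ** 2)   # terminal-state cost
        adv = cost - baseline              # baseline reduces gradient variance
        baseline = 0.99 * baseline + 0.01 * cost
        # Score-function (likelihood-ratio) estimate of the gradient of the
        # expected cost with respect to the controls: E[C * grad log p].
        W2 -= lr * adv * np.outer(y - p_y, h)
        W1 -= lr * adv * np.outer(h - p_h, x)
    return W1, W2

x = np.array([1.0, 0.0])
target = np.array([1.0, 0.0, 1.0])
W1, W2 = train(x, target)
```

Because the gradient is estimated purely from sampled trajectories of the process, this style of learning needs only locally available quantities per unit, which is one reason the control-theoretic view fits the neuromorphic setting described above.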