Seminar abstract

Deep architectures and folding

Dr Lech Szymanski
University of Otago

Abstract: There is no doubt that in recent years deep neural networks have pushed the envelope of state-of-the-art machine learning. But, as is typical of artificial neural network models, a huge share of this success is due to the uncanny, often unexplainable, intuition of the expert users who make decisions about the network architecture for a given problem. We need to understand these models at a far deeper level if we are to push the performance envelope even further. In this talk I will present my research on the fundamental differences between internal representations in shallow and deep architectures, with the aim of establishing how and when the latter can be better. Specifically, I will discuss one type of deep representation that is analogous to folding of the input space, and how effective it can be at approximating functions with repeating patterns.
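The folding intuition has a well-known one-dimensional illustration, sketched below (this is a standard construction of the general idea, not material from the talk itself): a two-unit ReLU layer can compute the tent map, which folds the interval [0, 1] onto itself, and composing k such layers yields a triangle wave with 2^k linear pieces, the kind of repeating pattern a one-hidden-layer ReLU network would need exponentially many units to match exactly. The function names fold_layer and deep_fold are illustrative, not from the source.

```python
# Minimal sketch (assumed construction, not the speaker's method):
# each layer computes the tent map t(x) = 2*relu(x) - 4*relu(x - 0.5),
# which folds [0, 1] onto itself; depth-k composition produces a
# triangle wave with 2**k linear pieces from only 2*k ReLU units.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fold_layer(x):
    # Tent map on [0, 1] built from two ReLU units: rises 0 -> 1 on
    # [0, 0.5], falls 1 -> 0 on [0.5, 1]. Each application doubles
    # the number of linear pieces of the composed function.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_fold(x, depth):
    # Compose the fold `depth` times: 2 units per layer.
    for _ in range(depth):
        x = fold_layer(x)
    return x

if __name__ == "__main__":
    # Dyadic grid so every breakpoint (a multiple of 2**-depth for
    # depth <= 12) lands exactly on a sample point.
    xs = np.linspace(0.0, 1.0, 2**12 + 1)
    for depth in (1, 2, 4, 8):
        ys = deep_fold(xs, depth)
        # Count linear pieces by counting slope changes numerically.
        slopes = np.round(np.diff(ys) / np.diff(xs), 6)
        pieces = 1 + int(np.count_nonzero(np.diff(slopes)))
        print(f"depth={depth}: {pieces} linear pieces from {2 * depth} ReLU units")
```

Running the sketch prints 2, 4, 16, and 256 linear pieces for depths 1, 2, 4, and 8, making concrete how depth buys an exponential number of repetitions at a linear cost in units.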

Bio: Dr Lech Szymanski is a lecturer in the Department of Computer Science at the University of Otago, New Zealand. His main research interests include machine learning, deep representations, and connectionist models, especially with applications to computer vision. Before his PhD, which he completed in 2012, he worked as a software engineer for a wireless telecommunications company in Ottawa, Canada.