This talk will give a brief history of deep learning architectures, then move into modern trends and research in the field. Key points of discussion will be neural activation functions, weight optimization strategies, techniques for hyperparameter selection, and example architectures for different problem domains. We will finish with a few notable examples of "web scale" deep learning at work.
The talk will be 30 minutes long and will briefly cover sklearn, Theano, pylearn2, theanets, and hyperopt.
Status: Accepted