NIPS is the premier conference on Deep Learning. Given how quickly the state of the art is advancing, it’s interesting to see what’s new.
The paper list is available from http://www.dlworkshop.org/accepted-papers. These are the papers that stood out to me (or at least matched my interests).
cuDNN: Efficient Primitives for Deep Learning: A library from NVIDIA for deep learning on GPUs, with a reported ~36% speedup in training on a K40. It has been integrated into Caffe (which has quickly become the standard Deep Learning library).
Distilling the Knowledge in a Neural Network: Compressing the knowledge of a large, complex model into a smaller, simpler one. I need to read this a few more times to understand it well enough to comment sensibly. By Oriol Vinyals, Geoffrey Hinton and Jeff Dean, so there’s that…
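As I understand it so far, the core trick is training the small model on the large model’s softened softmax outputs rather than on hard labels. A minimal sketch of that idea (the logit values here are made up for illustration, not from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T gives softer targets."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical teacher logits for one example.
teacher_logits = np.array([6.0, 2.0, -1.0])

# At T=1 the teacher is nearly certain of class 0 ...
hard = softmax(teacher_logits, T=1.0)
# ... but at a higher temperature the teacher's knowledge about how
# the *wrong* classes relate to each other becomes visible.
soft = softmax(teacher_logits, T=4.0)

def distill_loss(student_logits, teacher_logits, T):
    """Cross-entropy of the student against the soft teacher targets."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return -np.sum(p * np.log(q))
```

The soft distribution carries far more information per example than a one-hot label, which is (I gather) why the small model can learn so much from it.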
Learning Word Representations with Hierarchical Sparse Coding: From the Ark group at CMU. An alternative to Word2Vec for understanding word semantics. Results seem roughly comparable to Word2Vec (which is good! Word2Vec is pretty much one of the miracles of the age.) There is a claim that training is significantly faster than previous methods: 2 hours vs. 6.5 hours for Word2Vec on the same 6.8-billion-word corpus. See the Paragraph Vectors paper as well.
Deep Learning as an Opportunity in Virtual Screening: Deep Learning for drug screening. I know nothing about drug screening, but this seems pretty significant:
Deep learning outperformed all other methods with respect to the area under ROC curve and was significantly better than all commercial products. Deep learning surpassed the threshold to make virtual compound screening possible and has the potential to become a standard tool in industrial drug design.
Document Embedding with Paragraph Vectors: “Lady Gaga” – “American” + “Japanese” = “Ayumi Hamasaki”. Ayumi Hamasaki “has been dubbed the ‘Empress of J-Pop’ because of her popularity and widespread influence in Japan and throughout Asia… Since her 1998 debut with the single recording ‘Poker Face’, Hamasaki has sold over 53 million records in Japan, ranking her among the best-selling recording artists in the country”.
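That analogy is just vector arithmetic plus a nearest-neighbour lookup by cosine similarity. A toy sketch of the mechanism, with completely made-up three-dimensional vectors standing in for real learned paragraph vectors:

```python
import numpy as np

# Toy embedding table (vectors invented purely for illustration;
# a real system would use vectors learned from a large corpus).
vectors = {
    "lady_gaga":      np.array([0.9, 0.8, 0.1]),   # pop star + Western
    "american":       np.array([0.1, 0.9, 0.0]),
    "japanese":       np.array([0.1, 0.0, 0.9]),
    "ayumi_hamasaki": np.array([0.9, 0.0, 0.95]),  # pop star + Japanese
    "sushi":          np.array([0.0, 0.1, 0.8]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(pos_a, neg_b, pos_c):
    """Answer 'a - b + c = ?' by nearest cosine neighbour,
    excluding the query words themselves."""
    target = vectors[pos_a] - vectors[neg_b] + vectors[pos_c]
    candidates = {w: cosine(target, v) for w, v in vectors.items()
                  if w not in (pos_a, neg_b, pos_c)}
    return max(candidates, key=candidates.get)

best = analogy("lady_gaga", "american", "japanese")
```

In this toy space the nearest neighbour of “Lady Gaga – American + Japanese” is indeed the made-up Ayumi Hamasaki vector, because the “pop star” component survives the subtraction while the nationality component is swapped.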
Explain Images with Multimodal Recurrent Neural Networks: Image captioning. There have been a few papers on this floating around lately, seemingly in preparation for CVPR 2015. This one is from UCLA & Baidu. Others include Stanford, Google, Berkeley, Microsoft and University of Toronto. Someone should go and compare them all.
Deep Learning for Answer Sentence Selection: Selecting sentences that answer a question using only word/sentence semantics. This is a weird approach, but it appears to work well in some circumstances. I don’t think it would be possible to build a QA system around it alone, but it could be a useful adjunct to more traditional methods.
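The flavour of the idea, as I read it: embed the question and each candidate sentence, then rank candidates by semantic similarity. A crude sketch using averaged word vectors and cosine similarity (the tiny embedding table is invented for illustration; the paper learns its representations rather than averaging):

```python
import numpy as np

# Made-up word vectors; function words get a small common vector.
emb = {
    "capital": np.array([1.0, 0.0, 0.5]),
    "france":  np.array([0.0, 1.0, 0.5]),
    "paris":   np.array([0.5, 0.8, 0.6]),
    "cats":    np.array([0.9, 0.0, 0.0]),
    "sleep":   np.array([0.8, 0.1, 0.0]),
}
for w in ("what", "is", "the", "of"):
    emb[w] = np.array([0.1, 0.1, 0.1])

def sent_vec(sentence):
    """Represent a sentence as the mean of its word vectors."""
    return np.mean([emb[w] for w in sentence.split()], axis=0)

def score(question, sentence):
    """Rank candidate answer sentences by cosine similarity."""
    q, s = sent_vec(question), sent_vec(sentence)
    return q @ s / (np.linalg.norm(q) * np.linalg.norm(s))

question = "what is the capital of france"
good = "paris is the capital of france"
bad = "cats sleep"
```

Here the semantically related sentence scores higher than the unrelated one, which is the behaviour the selection model relies on; the interesting part of the paper is making this work without any hand-built overlap features.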