


5th ICLR 2017: Toulon, France
- 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net 2017

Paper decision: Accept (Oral)
- Jonathon Cai, Richard Shin, Dawn Song: Making Neural Programming Architectures Generalize via Recursion.
- Johannes Ballé, Valero Laparra, Eero P. Simoncelli: End-to-end Optimized Image Compression.
- Sachin Ravi, Hugo Larochelle: Optimization as a Model for Few-Shot Learning.
- Antoine Bordes, Y-Lan Boureau, Jason Weston: Learning End-to-End Goal-Oriented Dialog.
- Martín Arjovsky, Léon Bottou: Towards Principled Methods for Training Generative Adversarial Networks.
- Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, Koray Kavukcuoglu: Reinforcement Learning with Unsupervised Auxiliary Tasks.
- Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni: Multi-Agent Cooperation and the Emergence of (Natural) Language.
- Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals: Understanding deep learning requires rethinking generalization.
- Barret Zoph, Quoc V. Le: Neural Architecture Search with Reinforcement Learning.
- Shixiang Gu, Timothy P. Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine: Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic.
- Alexey Dosovitskiy, Vladlen Koltun: Learning to Act by Predicting the Future.
- Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima.
- Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, Kunal Talwar: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data.
- Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár: Amortised MAP Inference for Image Super-resolution.
- Daniel D. Johnson: Learning Graphical State Transitions.
Paper decision: Accept (Poster)
- Gabriel Loaiza-Ganem, Yuanjun Gao, John P. Cunningham: Maximum Entropy Flow Networks.
- C. Daniel Freeman, Joan Bruna: Topology and Geometry of Half-Rectified Network Optimization.
- Sergey Zagoruyko, Nikos Komodakis: Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer.
- Alex X. Lee, Sergey Levine, Pieter Abbeel: Learning Visual Servoing with Deep Features and Fitted Q-Iteration.
- Carlos Florensa, Yan Duan, Pieter Abbeel: Stochastic Neural Networks for Hierarchical Reinforcement Learning.
- George Philipp, Jaime G. Carbonell: Nonparametric Neural Networks.
- Jimmy Ba, Roger B. Grosse, James Martens: Distributed Second-Order Optimization using Kronecker-Factored Approximations.
- Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf: Pruning Filters for Efficient ConvNets.
- Florian Bordes, Sina Honari, Pascal Vincent: Learning to Generate Samples from Noise through Infusion Training.
- Xingyi Li, Fuxin Li, Xiaoli Z. Fern, Raviv Raich: Filter shaping for Convolutional Neural Networks.
- Mengye Ren, Renjie Liao, Raquel Urtasun, Fabian H. Sinz, Richard S. Zemel: Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes.
- Eleanor Batty, Josh Merel, Nora Brackbill, Alexander Heitman, Alexander Sher, Alan M. Litke, E. J. Chichilnisky, Liam Paninski: Multilayer Recurrent Network Models of Primate Retinal Ganglion Cell Responses.
- David Warde-Farley, Yoshua Bengio: Improving Generative Adversarial Networks with Denoising Feature Matching.
- Minmin Chen: Efficient Vector Representation for Documents through Corruption.
- Abhishek Gupta, Coline Devin, Yuxuan Liu, Pieter Abbeel, Sergey Levine: Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning.
- Xingyu Lin, Hao Wang, Zhihao Li, Yimeng Zhang, Alan L. Yuille, Tai Sing Lee: Transfer of View-manifold Learning to Similarity Perception of Novel Objects.
- Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge: What does it take to generate natural textures?
- Brian Cheung, Eric Weiss, Bruno A. Olshausen: Emergence of foveal image sampling from learning to attend in visual scenes.
- Wentao Huang, Kechen Zhang: An Information-Theoretic Framework for Fast and Robust Unsupervised Learning via Neural Population Infomax.
- Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma: PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications.
- Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li: Mode Regularized Generative Adversarial Networks.
- Klaus Greff, Rupesh Kumar Srivastava, Jürgen Schmidhuber: Highway and Residual Networks learn Unrolled Iterative Estimation.
- Edouard Grave, Armand Joulin, Nicolas Usunier: Improving Neural Language Models with a Continuous Cache.
- Yaniv Taigman, Adam Polyak, Lior Wolf: Unsupervised Cross-Domain Image Generation.
- Bradly C. Stadie, Pieter Abbeel, Ilya Sutskever: Third Person Imitation Learning.
- Sanjay Purushotham, Wilka Carvalho, Tanachat Nilanon, Yan Liu: Variational Recurrent Adversarial Deep Domain Adaptation.
- Pavol Bielik, Veselin Raychev, Martin T. Vechev: Program Synthesis for Character Level Language Modeling.
- Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, Soumith Chintala: Episodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement.
- Karen Ullrich, Edward Meeds, Max Welling: Soft Weight-Sharing for Neural Network Compression.
- Chengtao Li, Daniel Tarlow, Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman: Neural Program Lattices.
- Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun: Tracking the World State with Recurrent Entity Networks.
- Taco S. Cohen, Max Welling: Steerable CNNs.
- Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P. Bigus, Murray Campbell, Ban Kawas, Kartik Talamadupula, Gerry Tesauro, Satinder Singh: Learning to Query, Reason, and Answer Questions On Ambiguous Texts.
- William Lotter, Gabriel Kreiman, David D. Cox: Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning.
- Adriana Romero, Pierre Luc Carrier, Akram Erraqabi, Tristan Sylvain, Alex Auvolat, Etienne Dejoie, Marc-André Legault, Marie-Pierre Dubé, Julie G. Hussin, Yoshua Bengio: Diet Networks: Thin Parameters for Fat Genomics.
- Timothy Dozat, Christopher D. Manning: Deep Biaffine Attention for Neural Dependency Parsing.
- Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taïga, Francesco Visin, David Vázquez, Aaron C. Courville: PixelVAE: A Latent Variable Model for Natural Images.
- Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger: Snapshot Ensembles: Train 1, Get M for Free.
- Yuxin Wu, Yuandong Tian: Training Agent for First-Person Shooter Game with Actor-Critic Curriculum Learning.
- Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, Pushmeet Kohli: Neuro-Symbolic Program Synthesis.
- Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee: Decomposing Motion and Content for Natural Video Sequence Prediction.
- Harrison Edwards, Amos J. Storkey: Towards a Neural Statistician.
- Danica J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alexander J. Smola, Arthur Gretton: Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy.
- Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine: Generalizing Skills with Semi-Supervised Reinforcement Learning.
- Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, Frank Hutter: Learning Curve Prediction with Bayesian Neural Networks.
- Ke Li, Jitendra Malik: Learning to Optimize.
- Shuohang Wang, Jing Jiang: A Compare-Aggregate Model for Matching Text Sequences.
- Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Lévy, Aiming Nie, Dan Jurafsky, Andrew Y. Ng: Data Noising as Smoothing in Neural Network Language Models.
- Shengjie Wang, Haoran Cai, Jeff A. Bilmes, William S. Noble: Training Compressed Fully-Connected Networks with a Density-Diversity Penalty.
- Akash Srivastava, Charles Sutton: Autoencoding Variational Inference For Topic Models.
- Akshay Balsubramani: Optimal Binary Autoencoding with Pairwise Correlations.
- Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger B. Grosse: On the Quantitative Analysis of Decoder-Based Generative Models.
- Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally: Trained Ternary Quantization.
- Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro, William J. Dally: DSD: Dense-Sparse-Dense Training for Deep Neural Networks.
- Michael Chang, Tomer D. Ullman, Antonio Torralba, Joshua B. Tenenbaum: A Compositional Object-Based Approach to Learning Physical Dynamics.
- Lukasz Kaiser, Ofir Nachum, Aurko Roy, Samy Bengio: Learning to Remember Rare Events.
- Zhilin Yang, Ruslan Salakhutdinov, William W. Cohen: Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks.
- Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, Ruslan Salakhutdinov: Words or Characters? Fine-grained Gating for Reading Comprehension.
- Sanjeev Arora, Yingyu Liang, Tengyu Ma: A Simple but Tough-to-Beat Baseline for Sentence Embeddings.
- Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo: Capacity and Trainability in Recurrent Neural Networks.
- Misha Denil, Pulkit Agrawal, Tejas D. Kulkarni, Tom Erez, Peter W. Battaglia, Nando de Freitas: Learning to Perform Physics Experiments via Deep Reinforcement Learning.
- Ofir Nachum, Mohammad Norouzi, Dale Schuurmans: Improving Policy Gradient by Exploring Under-appreciated Rewards.
- Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, Peter Norvig: Deep Learning with Dynamic Computation Graphs.
- Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard H. Hovy, Aaron C. Courville: Calibrating Energy-based Generative Adversarial Networks.
- Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz: Pruning Convolutional Neural Networks for Resource Efficient Inference.
- Min Joon Seo, Sewon Min, Ali Farhadi, Hannaneh Hajishirzi: Query-Reduction Networks for Question Answering.
- Bowen Baker, Otkrist Gupta, Nikhil Naik, Ramesh Raskar: Designing Neural Network Architectures using Reinforcement Learning.
- Shuohang Wang, Jing Jiang: Machine Comprehension Using Match-LSTM and Answer Pointer.
- Tian Zhao, Xiaobing Huang, Yu Cao: DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning.
- Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi: Bidirectional Attention Flow for Machine Comprehension.
- Guillaume Berger, Roland Memisevic: Incorporating long-range consistency in CNN-based texture generation.
- Caiming Xiong, Victor Zhong, Richard Socher: Dynamic Coattention Networks For Question Answering.
- Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron C. Courville, Yoshua Bengio: SampleRNN: An Unconditional End-to-End Neural Audio Generation Model.
- Jessica B. Hamrick, Andrew J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, Peter W. Battaglia: Metacontrol for Adaptive Imagination-Based Optimization.
- Sharan Narang, Greg Diamos, Shubho Sengupta, Erich Elsen: Exploring Sparsity in Recurrent Neural Networks.
- Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár: Lossy Image Compression with Compressive Autoencoders.
- Yoon Kim, Carl Denton, Luong Hoang, Alexander M. Rush: Structured Attention Networks.
- David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron C. Courville, Christopher J. Pal: Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations.
- Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, David M. Blei: Deep Probabilistic Programming.
- Jianwei Yang, Anitha Kannan, Dhruv Batra, Devi Parikh: LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation.
- Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, Pieter Abbeel: Variational Lossy Autoencoder.
- Thomas Laurent, James von Brecht: A recurrent neural network without chaos.
- Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, Jeff Dean: Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.
- David Alvarez-Melis, Tommi S. Jaakkola: Tree-structured decoding with doubly-recurrent neural networks.
- Abhishek Sinha, Aahitagni Mukherjee, Mausoom Sarkar, Balaji Krishnamurthy: Introspection: Accelerating Neural Network Training By Learning Weight Evolution.
- Lisha Li, Kevin G. Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar: Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization.
- Greg Yang, Alexander M. Rush: Lie-Access Neural Turing Machines.
- James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher: Quasi-Recurrent Neural Networks.
- Silvia Chiappa, Sébastien Racanière, Daan Wierstra, Shakir Mohamed: Recurrent Environment Simulators.
- Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, Sergey Levine: EPOpt: Learning Robust Neural Network Policies Using Model Ensembles.
- Janarthanan Rajendran, Aravind S. Lakshminarayanan, Mitesh M. Khapra, P. Prasanna, Balaraman Ravindran: Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from multiple sources in the same domain.
- Wanjia He, Weiran Wang, Karen Livescu: Multi-view Recurrent Neural Acoustic Word Embeddings.
- John Thickstun, Zaïd Harchaoui, Sham M. Kakade: Learning Features of Music From Scratch.
- Dan Hendrycks, Kevin Gimpel: A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks.
- Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H. S. Torr, Pushmeet Kohli: Learning to superoptimize programs.
- Leonard Berrada, Andrew Zisserman, M. Pawan Kumar: Trusting SVM for Piecewise Linear CNNs.
- Peter O'Connor, Max Welling: Sigma Delta Quantized Networks.
- Zhouhan Lin, Minwei Feng, Cícero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, Yoshua Bengio: A Structured Self-Attentive Sentence Embedding.
- Pau Rodríguez, Jordi Gonzàlez, Guillem Cucurull, Josep M. Gonfaus, F. Xavier Roca: Regularizing CNNs with Locally Constrained Decorrelations.
- Chris J. Maddison, Andriy Mnih, Yee Whye Teh: The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables.
- Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein: Unrolled Generative Adversarial Networks.
- Adji B. Dieng, Chong Wang, Jianfeng Gao, John W. Paisley: TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency.
- Michal Daniluk, Tim Rocktäschel, Johannes Welbl, Sebastian Riedel: Frustratingly Short Attention Spans in Neural Language Modeling.
- Hanjun Dai, Bo Dai, Yan-Ming Zhang, Shuang Li, Le Song: Recurrent Hidden Semi-Markov Model.
- Maximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der Smagt: Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data.
- Ishan P. Durugkar, Ian Gemp, Sridhar Mahadevan: Generative Multi-Adversarial Networks.
- Çaglar Gülçehre, Marcin Moczulski, Francesco Visin, Yoshua Bengio: Mollifying Networks.
- Irina Higgins, Loïc Matthey, Arka Pal, Christopher P. Burgess, Xavier Glorot, Matthew M. Botvinick, Shakir Mohamed, Alexander Lerchner: beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.
- Samuel L. Smith, David H. P. Turban, Steven Hamblin, Nils Y. Hammerla: Offline bilingual word vectors, orthogonal transformations and the inverted softmax.
- Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel, Max Welling: Visualizing Deep Neural Network Decisions: Prediction Difference Analysis.
- Eric Jang, Shixiang Gu, Ben Poole: Categorical Reparameterization with Gumbel-Softmax.
- Priyank Jaini, Zhitang Chen, Pablo Carbajal, Edith Law, Laura Middleton, Kayla Regan, Mike Schaekermann, George Trimponias, James Tung, Pascal Poupart: Online Bayesian Transfer Learning for Sequential Data Modeling.
- William Chan, Yu Zhang, Quoc V. Le, Navdeep Jaitly: Latent Sequence Decompositions.
- Hang Qi, Evan Randall Sparks, Ameet Talwalkar: Paleo: A Performance Model for Deep Neural Networks.
- Brendan O'Donoghue, Rémi Munos, Koray Kavukcuoglu, Volodymyr Mnih: Combining policy gradient and Q-learning.
- Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio: Density estimation using Real NVP.
- Tim Cooijmans, Nicolas Ballas, César Laurent, Çaglar Gülçehre, Aaron C. Courville: Recurrent Batch Normalization.
- Ilya Loshchilov, Frank Hutter: SGDR: Stochastic Gradient Descent with Warm Restarts.
- Arvind Neelakantan, Quoc V. Le, Martín Abadi, Andrew McCallum, Dario Amodei: Learning a Natural Language Interface with Neural Programmer.
- Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, Jan Kautz: Reinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU.
- Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell: Learning to Navigate in Complex Environments.
- Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow: DeepCoder: Learning to Write Programs.
- Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft: Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks.
- Yacine Jernite, Edouard Grave, Armand Joulin, Tomás Mikolov: Variable Computation in Recurrent Neural Networks.
- Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy: Deep Variational Information Bottleneck.
- Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, Tomás Kociský: The Neural Noisy Channel.
- W. James Murdoch, Arthur Szlam: Automatic Rule Extraction from Long Short Term Memory Networks.
- Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston: Dialogue Learning With Human-in-the-Loop.
- Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martín Arjovsky, Olivier Mastropietro, Aaron C. Courville: Adversarially Learned Inference.
- Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston: Learning through Dialogue Interactions by Asking Questions.
- Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein: Deep Information Propagation.
- Gustav Larsson, Michael Maire, Gregory Shakhnarovich: FractalNet: Ultra-Deep Neural Networks without Residuals.
- David Lopez-Paz, Maxime Oquab: Revisiting Classifier Two-Sample Tests.
- Sahil Sharma, Aravind S. Lakshminarayanan, Balaraman Ravindran: Learning to Repeat: Fine Grained Action Repetition for Deep Reinforcement Learning.
- Lu Hou, Quanming Yao, James T. Kwok: Loss-aware Binarization of Deep Networks.
- Frank S. He, Yang Liu, Alexander G. Schwing, Jian Peng: Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening.
- Junbo Jake Zhao, Michaël Mathieu, Yann LeCun: Energy-based Generative Adversarial Networks.
- Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, Susanne Saminger-Platz: Central Moment Discrepancy (CMD) for Domain-Invariant Representation Learning.
- Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen: Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights.
- Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Levent Sagun, Riccardo Zecchina: Entropy-SGD: Biasing Gradient Descent Into Wide Valleys.
- Yongxin Yang, Timothy M. Hospedales: Deep Multi-task Representation Learning: A Tensor Factorisation Approach.
- Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Rémi Munos, Koray Kavukcuoglu, Nando de Freitas: Sample Efficient Actor-Critic with Experience Replay.
- Samuli Laine, Timo Aila: Temporal Ensembling for Semi-Supervised Learning.
- Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff: On Detecting Adversarial Perturbations.
- Jacob Goldberger, Ehud Ben-Reuven: Training deep neural-networks using a noise adaptation layer.
- Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, Wang Ling: Learning to Compose Words into Sentences with Reinforcement Learning.
- Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song: Delving into Transferable Adversarial Examples and Black-box Attacks.
- Moritz Hardt, Tengyu Ma: Identity Matters in Deep Learning.
- Jeff Donahue, Philipp Krähenbühl, Trevor Darrell: Adversarial Feature Learning.
- Yoojin Choi, Mostafa El-Khamy, Jungwon Lee: Towards the Limit of Network Quantization.
- Jongsoo Park, Sheng R. Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey: Faster CNNs with Direct Sparse Convolutions and Guided Pruning.
- Eric T. Nalisnick, Padhraic Smyth: Stick-Breaking Variational Autoencoders.
- Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, David Carter: Batch Policy Gradient Methods for Improving Neural Conversation Models.
- Yingzhen Yang, Jiahui Yu, Pushmeet Kohli, Jianchao Yang, Thomas S. Huang: Support Regularized Sparse Coding and Its Fast Encoder.
- Hakan Inan, Khashayar Khosravi, Richard Socher: Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling.
- Haizi Yu, Lav R. Varshney: Towards Deep Interpretability (MUS-ROVER II): Learning Hierarchical Representations of Tonal Music.
- Jason Tyler Rolfe: Discrete Variational Autoencoders.
- Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Özlem Aslan, Shengjie Wang, Abdelrahman Mohamed, Matthai Philipose, Matthew Richardson, Rich Caruana: Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
- Jiaqi Mu, Suma Bhat, Pramod Viswanath: Geometry of Polysemy.
- Gautam Pai, Aaron Wetzler, Ron Kimmel: Learning Invariant Representations Of Planar Curves.
- Tsendsuren Munkhdalai, Hong Yu: Reasoning with Memory Augmented Neural Networks for Language Comprehension.
- Eyrun Eyjolfsdottir, Kristin Branson, Yisong Yue, Pietro Perona: Learning Recurrent Representations for Hierarchical Behavior Modeling.
- Alexey Kurakin, Ian J. Goodfellow, Samy Bengio: Adversarial Machine Learning at Scale.
- Jacek M. Bajor, Thomas A. Lasko: Predicting Medications from Diagnostic Codes with Recurrent Neural Networks.
- Loris Bazzani, Hugo Larochelle, Lorenzo Torresani: Recurrent Mixture Density Network for Spatiotemporal Visual Attention.
- Nadav Cohen, Amnon Shashua: Inductive Bias of Deep Convolutional Networks through Pooling Geometry.
- Ronen Basri, David W. Jacobs: Efficient Representation of Low-Dimensional Manifolds using Deep Networks.
- Thomas N. Kipf, Max Welling: Semi-Supervised Classification with Graph Convolutional Networks.
- Arash Ardakani, Carlo Condo, Warren J. Gross: Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks.
- Takeru Miyato, Andrew M. Dai, Ian J. Goodfellow: Adversarial Training Methods for Semi-Supervised Text Classification.
- Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg: Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks.
- Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher: Pointer Sentinel Mixture Models.
- Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron C. Courville, Yoshua Bengio: An Actor-Critic Algorithm for Sequence Prediction.
- Thomas Moreau, Joan Bruna: Understanding Trainable Sparse Coding with Matrix Factorization.
- Nicolas Le Roux: Tighter bounds lead to improved classifiers.
- Cezary Kaliszyk, François Chollet, Christian Szegedy: HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving.
- Shiyu Liang, R. Srikant: Why Deep Neural Networks for Function Approximation?
- Junyoung Chung, Sungjin Ahn, Yoshua Bengio: Hierarchical Multiscale Recurrent Neural Networks.
- Andrew Brock, Theodore Lim, James M. Ritchie, Nick Weston: Neural Photo Editing with Introspective Adversarial Networks.
- Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yaoliang Yu, Yuntian Deng, Eduard H. Hovy: Dropout with Expectation-linear Regularization.
- David Ha, Andrew M. Dai, Quoc V. Le: HyperNetworks.
- Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur: A Learned Representation For Artistic Style.
- Jin-Hwa Kim, Kyoung Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang: Hadamard Product for Low-rank Bilinear Pooling.
