
Ian Goodfellow's Recap: A Fruitful ICLR 2017 for Google

雷鋒網 (Leifeng.com) reports that Ian Goodfellow of the Google Brain team published a post today on the Google Research blog summarizing Google's academic contributions at ICLR 2017. 雷鋒網's full translation follows; reproduction without permission is prohibited.

This week, the 5th International Conference on Learning Representations (ICLR 2017) convened in Toulon, France. The conference focuses on how machine learning can learn meaningful and useful representations from data. ICLR comprises a conference track and a workshop track, inviting researchers with accepted oral and poster presentations to share work spanning deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and non-convex optimization.

Riding the crest of the wave in neural networks and deep learning, Google attends to both theory and practice, and is committed to developing learning methods for understanding and generalization. As a Platinum Sponsor of ICLR 2017, Google has more than 50 researchers attending the conference (most of them members of the Google Brain team and Google's European research division), contributing to and learning from the broader academic research community by presenting papers and posters on site. Google researchers also form a mainstay of the workshops and organizing committees.

If you are attending ICLR 2017, we hope you will stop by our booth and talk with our researchers about how to solve interesting problems for billions of people.

Below are the papers Google presented at ICLR 2017 (Google researchers were shown in bold in the original listing).

Area Chairs

George Dahl, Slav Petrov, Vikas Sindhwani

Program Chairs (previously profiled by 雷鋒網)

Hugo Larochelle, Tara Sainath

Invited Talks

Understanding Deep Learning Requires Rethinking Generalization (Best Paper Award)

Chiyuan Zhang*, Samy Bengio, Moritz Hardt, Benjamin Recht*, Oriol Vinyals

Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data (Best Paper Award)

Nicolas Papernot*, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic

Shixiang (Shane) Gu*, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

Neural Architecture Search with Reinforcement Learning

Barret Zoph, Quoc Le

Posters

Adversarial Machine Learning at Scale

Alexey Kurakin, Ian J. Goodfellow†, Samy Bengio

Capacity and Trainability in Recurrent Neural Networks

Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo

Improving Policy Gradient by Exploring Under-Appreciated Rewards

Ofir Nachum, Mohammad Norouzi, Dale Schuurmans

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean

Unrolled Generative Adversarial Networks

Luke Metz, Ben Poole*, David Pfau, Jascha Sohl-Dickstein

Categorical Reparameterization with Gumbel-Softmax

Eric Jang, Shixiang (Shane) Gu*, Ben Poole*

Decomposing Motion and Content for Natural Video Sequence Prediction

Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

Density Estimation Using Real NVP

Laurent Dinh*, Jascha Sohl-Dickstein, Samy Bengio

Latent Sequence Decompositions

William Chan*, Yu Zhang*, Quoc Le, Navdeep Jaitly*

Learning a Natural Language Interface with Neural Programmer

Arvind Neelakantan*, Quoc V. Le, Martín Abadi, Andrew McCallum*, Dario Amodei*

Deep Information Propagation

Samuel Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein

Identity Matters in Deep Learning

Moritz Hardt, Tengyu Ma

A Learned Representation For Artistic Style

Vincent Dumoulin*, Jonathon Shlens, Manjunath Kudlur

Adversarial Training Methods for Semi-Supervised Text Classification

Takeru Miyato, Andrew M. Dai, Ian Goodfellow†

HyperNetworks

David Ha, Andrew Dai, Quoc V. Le

Learning to Remember Rare Events

Lukasz Kaiser, Ofir Nachum, Aurko Roy*, Samy Bengio

Workshop Track

Particle Value Functions

Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh

Neural Combinatorial Optimization with Reinforcement Learning

Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio

Short and Deep: Sketching and Neural Networks

Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

Explaining the Learning Dynamics of Direct Feedback Alignment

Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein

Training a Subsampling Mechanism in Expectation

Colin Raffel, Dieterich Lawson

Tuning Recurrent Neural Networks with Reinforcement Learning

Natasha Jaques*, Shixiang (Shane) Gu*, Richard E. Turner, Douglas Eck

REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models

George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein

Adversarial Examples in the Physical World

Alexey Kurakin, Ian Goodfellow†, Samy Bengio

Regularizing Neural Networks by Penalizing Confident Output Distributions

Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton

Unsupervised Perceptual Rewards for Imitation Learning

Pierre Sermanet, Kelvin Xu, Sergey Levine

Changing Model Behavior at Test-time Using Reinforcement Learning

Augustus Odena, Dieterich Lawson, Christopher Olah

* Work performed while at Google

† Work performed while at OpenAI

For more details, visit research.googleblog. Translated and compiled by 雷鋒網.
