It's the first day of the National Day holiday, and the whole country is celebrating! Every tourist spot is bound to be a sea of people, so why not stay home and read some papers instead? Submissions to ICLR 2019, one of the top machine learning conferences, have just closed; this year submissions are posted publicly but anonymously. This article rounds up papers currently drawing heated discussion on social networks abroad and on Zhihu. Let's take a look!
First, a look at the topic distribution of submissions to ICLR 2018, last year's conference, shown in the figure below. Top keywords: reinforcement learning, GAN, RIP, etc.
The figure above shows the distribution for ICLR 2019 submissions. Top keywords: reinforcement learning, GAN, meta-learning, and so on. There are some noticeable shifts from last year.
Submission listing:
https://openreview.net/group?id=ICLR.cc/2019/Conference
A more intuitive visualization of the topics across ICLR 2019 submissions is available on Google Colaboratory. We selected the third-ranked topic in the chart, "GAN", shown in red. The GAN topic overlaps with several other topics in the chart, such as training, state, and graph.
Top 5 most-discussed papers
1. LARGE SCALE GAN TRAINING FOR HIGH FIDELITY NATURAL IMAGE SYNTHESIS
The strongest GAN image generator yet: its outputs are hard to tell from real photos
Paper:
https://openreview.net/pdf?id=B1xsqj09Fm
More samples:
https://drive.google.com/drive/folders/1lWC6XEPD0LT5KUnPXeve_kWeY-FxH002
First up is BigGAN. Oriol Vinyals, who leads DeepMind's StarCraft project, said this paper delivers the best GAN-generated images to date, lifting the Inception Score by more than 100 points.
Abstract:
Despite recent progress in generative image modeling, generating high-resolution, diverse samples from complex datasets such as ImageNet remains a major challenge. To this end, the authors train generative adversarial networks at the largest scale attempted so far and study the instabilities specific to training at such scale. They find that applying orthogonal regularization to the generator makes it amenable to a simple "truncation trick", which allows fine control over the trade-off between sample fidelity and variety by truncating the latent space. With these modifications, the model reaches the current state of the art in class-conditional image synthesis. Trained on ImageNet at 128x128 resolution, the proposed model, BigGAN, achieves an Inception Score (IS) of 166.3 and a Frechet Inception Distance (FID) of 9.6, compared with the previous best IS of 52.52 and FID of 18.65.
BigGAN's generator architecture
Generated samples: remarkably lifelike
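The "truncation trick" mentioned in the abstract can be illustrated in a few lines of NumPy: sample latents from a standard normal and resample any coordinate that falls outside a threshold. This is a minimal sketch for intuition only; the threshold of 0.5 is arbitrary, not the paper's setting.

```python
import numpy as np

def truncated_z(batch_size, dim, threshold=0.5, seed=None):
    """Sample latent vectors from N(0, I), resampling every coordinate
    whose magnitude exceeds `threshold`. Lower thresholds trade sample
    diversity for fidelity -- the essence of the truncation trick."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((batch_size, dim))
    out_of_range = np.abs(z) > threshold
    while out_of_range.any():
        z[out_of_range] = rng.standard_normal(out_of_range.sum())
        out_of_range = np.abs(z) > threshold
    return z

z = truncated_z(4, 128, threshold=0.5, seed=0)
print(z.shape, float(np.abs(z).max()))
```

Feeding such truncated latents to a generator trained on untruncated noise is what lets fidelity be dialed up at sampling time, at the cost of variety.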
2.Recurrent Experience Replay in Distributed Reinforcement Learning
Recurrent experience replay in distributed reinforcement learning
論文地址:
https://openreview.net/pdf?id=r1lyTjAqYX
Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from experience replay. We investigate the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, triples the previous state of the art on Atari-57, and surpasses the state of the art on DMLab-30. R2D2 is the first agent to exceed human-level performance in 52 of the 57 Atari games.
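As a rough illustration of what recurrent experience replay involves, here is a sketch of a sequence replay buffer that stores the RNN hidden state recorded by the actor at the start of each sequence. All names are illustrative, not from the paper; the point is that the stored state becomes "stale" once the learner's parameters move on, which is exactly the effect the paper studies.

```python
import collections
import random

# A sequence replay buffer storing the recurrent state recorded at the
# start of each sequence -- the "stored state" strategy the paper compares
# with zero start states and burn-in. All names here are illustrative.
Sequence = collections.namedtuple("Sequence", "obs actions rewards init_hidden")

class RecurrentReplayBuffer:
    def __init__(self, capacity):
        self.buffer = collections.deque(maxlen=capacity)

    def add(self, obs, actions, rewards, init_hidden):
        """Actors push fixed-length sequences together with the hidden
        state their (possibly older) network copy produced at the start."""
        self.buffer.append(Sequence(obs, actions, rewards, init_hidden))

    def sample(self, batch_size):
        # By the time the learner samples, its parameters have changed,
        # so stored hidden states are stale (parameter lag) -- the
        # representational-drift effect the paper analyzes empirically.
        return random.sample(list(self.buffer), batch_size)

buf = RecurrentReplayBuffer(capacity=1000)
for t in range(10):
    buf.add(obs=[t], actions=[0], rewards=[1.0], init_hidden=[0.0, 0.0])
batch = buf.sample(4)
print(len(batch))  # 4
```

The paper's burn-in alternative would instead replay a prefix of each sequence through the current network to refresh the hidden state before computing losses.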
3.Shallow Learning For Deep Networks
Shallow learning for deep networks
論文地址:
https://openreview.net/forum?id=r1Gsk3R9Fm
Shallow, supervised 1-hidden-layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but they lack representational power. Here, the authors build deep networks sequentially, layer by layer, from 1-hidden-layer learning problems, allowing the deep network to inherit properties of shallow ones. In contrast to previous approaches using shallow networks, they focus on problems where depth is considered critical to success, studying CNNs on two large-scale image recognition tasks: ImageNet and CIFAR-10. Using a simple set of architectural and training ideas, they find that solving a sequence of 1-hidden-layer auxiliary problems yields a CNN that exceeds AlexNet's performance on ImageNet. Extending their training method to build individual layers by solving 2- and 3-hidden-layer auxiliary problems, they obtain an 11-layer network that surpasses VGG-11 on ImageNet, reaching 89.8% top-5 single-crop accuracy. To their knowledge, this is the first competitive alternative to end-to-end training of CNNs that scales to ImageNet. They conduct extensive experiments studying the properties this approach induces on the intermediate layers.
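The greedy layer-wise idea in the abstract can be sketched on a toy problem: each stage trains a 1-hidden-layer auxiliary classifier on the previous stage's frozen features, keeps the hidden layer, and discards the auxiliary head. This is a minimal NumPy sketch on synthetic data, not the paper's CNN setup; all hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def train_one_layer(features, labels, hidden_dim, steps=400, lr=0.5):
    """Solve one 1-hidden-layer auxiliary problem: jointly train a hidden
    layer W (kept) and a linear head v (discarded) on logistic loss."""
    n, d = features.shape
    W = rng.standard_normal((d, hidden_dim)) * np.sqrt(2.0 / d)  # He init
    v = rng.standard_normal(hidden_dim) * 0.1
    for _ in range(steps):
        h = relu(features @ W)
        p = 1.0 / (1.0 + np.exp(-(h @ v)))      # sigmoid probabilities
        g = (p - labels) / n                    # dLoss/dlogits, averaged
        grad_v = h.T @ g
        grad_h = np.outer(g, v) * (h > 0)       # backprop through ReLU
        W -= lr * (features.T @ grad_h)
        v -= lr * grad_v
    return W, v

# Synthetic binary task: the label depends linearly on two coordinates.
X = rng.standard_normal((256, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Greedily stack 3 layers: each stage sees only the previous stage's
# frozen features; only W is kept, the auxiliary head v is dropped.
features, layers = X, []
for _ in range(3):
    W, v = train_one_layer(features, y, hidden_dim=16)
    layers.append(W)
    features = relu(features @ W)

# Training accuracy of the last auxiliary head on the 3-layer features.
p = 1.0 / (1.0 + np.exp(-(features @ v)))
acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"train accuracy: {acc:.2f}")
```

Note that no gradient ever flows between stages: each layer is optimized only through its own shallow auxiliary problem, which is what makes the method an alternative to end-to-end training.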
4.Relational Graph Attention Networks
Relational graph attention networks
論文地址:
https://openreview.net/forum?id=Bklzkh0qFm¬eId=HJxMHja3Y7
Abstract:
In this paper we present Relational Graph Attention Networks, an extension of Graph Attention Networks to incorporate both node features and relational information into a masked attention mechanism, extending graph-based attention methods to a wider variety of problems, specifically, predicting the properties of molecules. We demonstrate that our attention mechanism gives competitive results on a molecular toxicity classification task (Tox21), enhancing the performance of its spectral-based convolutional equivalent. We also investigate the model on a series of transductive knowledge base completion tasks, where its performance is noticeably weaker. We provide insights as to why this may be, and suggest when it is appropriate to incorporate an attention layer into a graph architecture.
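A masked, relation-aware attention layer of the general kind described above can be sketched as follows. This is an illustrative NumPy sketch, not the paper's exact formulation: features get a per-relation linear transform, GAT-style additive attention is masked by each relation's edges, and the per-relation messages are summed.

```python
import numpy as np

def masked_softmax(scores, mask):
    """Row-wise softmax restricted to positions where mask is True;
    rows with no valid positions get all-zero weights."""
    s = np.where(mask, scores, np.full_like(scores, -1e9))
    s = s - s.max(axis=1, keepdims=True)
    e = np.where(mask, np.exp(s), 0.0)
    denom = e.sum(axis=1, keepdims=True)
    return np.where(denom > 0, e / np.maximum(denom, 1e-12), 0.0)

def relational_attention_layer(H, rel_adjs, rel_Ws, rel_as):
    """One relation-aware masked-attention layer (illustrative sketch):
    per-relation transforms, additive attention masked by each relation's
    adjacency, messages summed over relations."""
    n, d_out = H.shape[0], rel_Ws[0].shape[1]
    out = np.zeros((n, d_out))
    for A, W, a in zip(rel_adjs, rel_Ws, rel_as):
        Hw = H @ W                                   # (n, d_out)
        src = Hw @ a[:d_out]                         # source-node term
        dst = Hw @ a[d_out:]                         # target-node term
        logits = src[:, None] + dst[None, :]
        logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
        alpha = masked_softmax(logits, A.astype(bool))
        out += alpha @ Hw                            # aggregate neighbours
    return out

# Tiny example: 3 nodes, 4 input features, 2 relation types.
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))
rel_adjs = [np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]),
            np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])]
rel_Ws = [rng.standard_normal((4, 5)) for _ in range(2)]
rel_as = [rng.standard_normal(10) for _ in range(2)]
out = relational_attention_layer(H, rel_adjs, rel_Ws, rel_as)
print(out.shape)  # (3, 5)
```

The masking is what keeps the attention graph-structured: a node only attends over neighbours connected by the relation in question.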
5.A Solution to China Competitive Poker Using Deep Learning
A deep learning approach to Dou dizhu (China Competitive Poker)
論文地址:
https://openreview.net/forum?id=rJzoujRct7
Abstract:
Recently, deep neural networks have achieved superhuman performance in various games such as Go, chess and Shogi. Compared to Go, China Competitive Poker, also known as Dou dizhu, is a type of imperfect information game, including hidden information, randomness, multi-agent cooperation and competition. It has become widespread and is now a national game in China. We introduce an approach to play China Competitive Poker using Convolutional Neural Network (CNN) to predict actions. This network is trained by supervised learning from human game records. Without any search, the network already beats the best AI program by a large margin, and also beats the best human amateur players in duplicate mode.
Other papers worth noting:
What are the highlights of ICLR 2019? - Bolei Zhou's answer - Zhihu
https://www.zhihu.com/question/296404213/answer/500575759
Titles that open with a question:
Are adversarial examples inevitable?
Transfer Value or Policy? A Value-centric Framework Towards Transferrable Continuous Reinforcement Learning
How Important is a Neuron?
How Powerful are Graph Neural Networks?
Do Language Models Have Common Sense?
Is Wasserstein all you need?
Aphorism-style titles:
Learning From the Experience of Others: Approximate Empirical Bayes in Neural Networks
In Your Pace: Learning the Right Example at the Right Time
Learning what you can do before doing anything
Like What You Like: Knowledge Distill via Neuron Selectivity Transfer
Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors
Wisecrack-style titles:
Look Ma, No GANs! Image Transformation with ModifAE
No Pressure! Addressing Problem of Local Minima in Manifold Learning
Backplay: 'Man muss immer umkehren'
Talk The Walk: Navigating Grids in New York City through Grounded Dialogue
Fatty and Skinny: A Joint Training Method of Watermark
A bird's eye view on coherence, and a worm's eye view on cohesion
Beyond Winning and Losing: Modeling Human Motivations and Behaviors with Vector-valued Inverse Reinforcement Learning
One-sentence-summary titles:
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.
Original title: Top 5 hotly discussed ICLR 2019 papers: BigGAN, a deep learning algorithm for Dou dizhu, and more
Source: 新智元 (WeChat ID: AI_era). Please credit the source when reposting.