A Paper a Day

Reading one deep learning research paper every day in December

Every day in December, I read and wrote an explanation of one deep learning research paper. I learned so much from reading a paper a day, and I had a ton of fun doing this!

Each week had a different topic:

  • Week 1 - Computer Vision
  • Week 2 - NLP
  • Week 3 - Deep Generative Algorithms
  • Week 4 - Deep Reinforcement Learning

I've also published all my paper explanations for free!

Reading Roadmap

Reading order goes from top to bottom, week by week

Week 1 - Computer Vision

ImageNet classification with deep convolutional neural networks

AlexNet - Deep Learning Breakthrough

A Krizhevsky, et al. (2012)

Deep residual learning for image recognition

ResNet - very deep networks

Kaiming He, et al. (2015)

Rich feature hierarchies for accurate object detection and semantic segmentation

R-CNN

Ross Girshick, et al. (2014)

Mask R-CNN

Kaiming He, et al. (2017)

You only look once: Unified, real-time object detection

YOLO

Joseph Redmon, et al. (2015)

YOLO9000: Better, Faster, Stronger

YOLO v2

Joseph Redmon, Ali Farhadi (2016)

Building high-level features using large scale unsupervised learning

Unsupervised learning milestone - Google Brain Project

Quoc V. Le, et al. (2013)

Week 2 - NLP

Efficient Estimation of Word Representations in Vector Space

CBOW and Skip-gram

Tomas Mikolov, et al. (2013)


Distributed representations of words and phrases and their compositionality

Word2Vec

Tomas Mikolov, et al. (2013)

GloVe: Global Vectors for Word Representation

Jeffrey Pennington, et al. (2014)


Attention Is All You Need

Ashish Vaswani, et al. (2017)

Universal Language Model Fine-tuning for Text Classification

ULMFiT

Jeremy Howard, Sebastian Ruder (2018)


Deep contextualized word representations

ELMo

Matthew E. Peters, et al. (2018)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

BERT

Jacob Devlin, et al. (2018)

Week 3 - Generative Algorithms

Generative adversarial nets

GAN

Ian Goodfellow, et al. (2014)


Unsupervised representation learning with deep convolutional generative adversarial networks

DCGAN

A Radford, L Metz, and S Chintala (2015)

Wasserstein GAN

WGAN

M Arjovsky, S Chintala, L Bottou (2017)


A neural algorithm of artistic style

Artistic style transfer

L.A. Gatys, A.S. Ecker, and M. Bethge (2015)

Colorful Image Colorization

Image colorization

R Zhang, P Isola, and A.A. Efros. (2016)

Auto-encoding variational Bayes

VAE

D.P. Kingma and M. Welling (2013)

DRAW: A recurrent neural network for image generation

VAE with attention

K Gregor, et al. (2015)

Week 4 - Reinforcement Learning

Playing Atari with deep reinforcement learning

First paper to use the term "deep reinforcement learning"

Volodymyr Mnih, et al. (2013)

Human-level control through deep reinforcement learning

Human-level control in Atari games

Volodymyr Mnih, et al. (2015)

Mastering the game of Go with deep neural networks and tree search

AlphaGo

David Silver, et al. (2016)

Mastering the Game of Go without Human Knowledge

AlphaGo Zero

David Silver, et al. (2017)

Thinking Fast and Slow with Deep Learning and Tree Search

Expert Iteration (ExIt), evaluated on Hex

Thomas Anthony, et al. (2017)

Dueling Network Architectures for Deep Reinforcement Learning

Dueling DQN

Ziyu Wang, et al. (2015)

Deep Reinforcement Learning with Double Q-Learning

H van Hasselt, et al. (2016)

Prioritized Experience Replay

T Schaul, et al. (2015)

Rainbow: Combining Improvements in Deep Reinforcement Learning

M Hessel, et al. (2017)

Asynchronous methods for deep reinforcement learning

A3C

Volodymyr Mnih, et al. (2016)

Trust Region Policy Optimization

TRPO

John Schulman, et al. (2015)

Proximal Policy Optimization Algorithms

PPO

John Schulman, et al. (2017)

High-Dimensional Continuous Control Using Generalized Advantage Estimation

GAE

John Schulman, et al. (2015)

How I Read Research Papers

I read every paper at least three times, using Srinivasan Keshav’s three-pass method.

The First Pass

Get a bird’s-eye view of the paper
Takes 5 to 10 minutes

  • Read the title, abstract, and introduction
  • Read the section and subsection headings, but ignore everything else
  • Read the conclusion
  • Answer: how does this paper contribute to the field?

The Second Pass

Get an intuitive understanding of the paper
Takes up to 1 hour

  • Read the entire paper carefully, but gloss over proofs
  • Pay careful attention to illustrations
  • Make note of any concepts you don’t understand

The Third Pass

Fully understand the paper
Takes up to 2 hours

  • Look up any concepts you noted down in the second pass and get an intuitive understanding of them
  • Read and understand proofs
  • Create an outline of the paper from memory