A Paper a Day

Reading one deep learning research paper every day in December

Every day in December, I read and wrote an explanation of one deep learning research paper. I learned so much from reading a paper a day, and I had a ton of fun doing it!

Each week had a different topic:

  • Week 1 - Computer Vision
  • Week 2 - NLP
  • Week 3 - Deep Generative Algorithms
  • Week 4 - Deep Reinforcement Learning

I've also published all my paper explanations for free!

Reading Roadmap

Reading order goes from top to bottom

Week 1 - Computer Vision

ImageNet classification with deep convolutional neural networks

AlexNet - Deep Learning Breakthrough

Alex Krizhevsky, et al. (2012)

Deep residual learning for image recognition

ResNet, very deep networks

Kaiming He, et al. (2015)

Rich feature hierarchies for accurate object detection and semantic segmentation


R-CNN - region-based object detection

Ross Girshick, et al. (2014)

Mask R-CNN

Kaiming He, et al. (2017)

You only look once: Unified, real-time object detection


Joseph Redmon, et al. (2015)

YOLO9000: Better, Faster, Stronger


YOLOv2

Joseph Redmon and Ali Farhadi (2016)

Building high-level features using large scale unsupervised learning

Unsupervised learning milestone - Google Brain Project

Quoc V. Le, et al. (2012)

Week 2 - NLP

Efficient Estimation of Word Representations in Vector Space

word2vec - CBOW and Skip-gram

Tomas Mikolov, et al. (2013)

Distributed representations of words and phrases and their compositionality


word2vec with negative sampling

Tomas Mikolov, et al. (2013)

GloVe: Global Vectors for Word Representation

Jeffrey Pennington, et al. (2014)

Attention Is All You Need

The Transformer

Ashish Vaswani, et al. (2017)

Universal Language Model Fine-tuning for Text Classification


ULMFiT - transfer learning for NLP

Jeremy Howard and Sebastian Ruder (2018)

Deep contextualized word representations


ELMo

Matthew E. Peters, et al. (2018)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding


Jacob Devlin, et al. (2018)

Week 3 - Deep Generative Algorithms

Generative adversarial nets


The original GAN paper

Ian Goodfellow, et al. (2014)

Unsupervised representation learning with deep convolutional generative adversarial networks


DCGAN

Alec Radford, Luke Metz, and Soumith Chintala (2015)

Wasserstein GAN


Martin Arjovsky, Soumith Chintala, and Léon Bottou (2017)

A neural algorithm of artistic style

Artistic style transfer

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge (2015)

Colorful Image Colorization

Image colorization

Richard Zhang, Phillip Isola, and Alexei A. Efros (2016)

Auto-encoding variational bayes


Variational autoencoder (VAE)

Diederik P. Kingma and Max Welling (2013)

DRAW: A recurrent neural network for image generation

VAE with attention

Karol Gregor, et al. (2015)

Week 4 - Deep Reinforcement Learning

Playing Atari with deep reinforcement learning

DQN - the first paper named "deep reinforcement learning"

Volodymyr Mnih, et al. (2013)

Human-level control through deep reinforcement learning

Human-level control in Atari games

Volodymyr Mnih, et al. (2015)

Mastering the game of Go with deep neural networks and tree search


AlphaGo

David Silver, et al. (2016)

Mastering the Game of Go without Human Knowledge

AlphaGo Zero

David Silver, et al. (2017)

Thinking Fast and Slow with Deep Learning and Tree Search


Expert Iteration (ExIt)

Thomas Anthony, et al. (2017)

Dueling Network Architectures for Deep Reinforcement Learning

Dueling DQN

Ziyu Wang, et al. (2015)

Deep Reinforcement Learning with Double Q-Learning

Double DQN

Hado van Hasselt, et al. (2016)

Prioritized Experience Replay

Tom Schaul, et al. (2015)

Rainbow: Combining Improvements in Deep Reinforcement Learning

Matteo Hessel, et al. (2017)

Asynchronous methods for deep reinforcement learning


A3C - asynchronous advantage actor-critic

Volodymyr Mnih, et al. (2016)

Trust Region Policy Optimization


TRPO

John Schulman, et al. (2015)

Proximal Policy Optimization Algorithms


PPO

John Schulman, et al. (2017)

High-Dimensional Continuous Control Using Generalized Advantage Estimation


GAE

John Schulman, et al. (2015)

How I Read Research Papers

I read every paper at least three times, using Srinivasan Keshav’s three-pass method.

The First Pass

Get a bird’s-eye view of the paper
Takes 5 to 10 minutes

  • Read the title, abstract, and introduction
  • Read the section and subsection headings, but ignore everything else
  • Read the conclusion
  • Answer: how does this paper contribute to the field?

The Second Pass

Get an intuitive understanding of the paper
Takes up to 1 hour

  • Read the entire paper carefully, but gloss over proofs
  • Pay careful attention to illustrations
  • Make note of any concepts you don’t understand

The Third Pass

Fully understand the paper
Takes up to 2 hours

  • Look up any concepts you noted down in the second pass and get an intuitive understanding of them
  • Read and understand proofs
  • Create an outline of the paper from memory