- Dropout
- LSTM
- MADE
- Multilayer Perceptron (MLP)
- Self Attention
- Transformer
- Variational Autoencoder (VAE)
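To give a flavor of what these reimplementations look like, here's a minimal sketch of single-head self-attention in PyTorch (illustrative code and naming, not the exact version in this repo):

```python
import math

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # One linear projection each for queries, keys, and values.
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Each position attends to every position: softmax over scaled dot products.
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        return scores.softmax(dim=-1) @ v  # (batch, seq_len, embed_dim)

print(SelfAttention(16)(torch.randn(2, 5, 16)).shape)  # torch.Size([2, 5, 16])
```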
I spent some time learning classical ML first since it was most relevant for my job. You can learn deep learning first without any other ML experience/knowledge.
I started off with a homemade "ML in 10 weeks" course, drawing content primarily from Hands-On Machine Learning with Scikit-Learn and TensorFlow and Andrew Ng's Coursera course on ML:
- Chapter 2 End-to-End Machine Learning Project
- Chapter 3 Classification (precision/recall, multiclass)
- Text feature extraction (from sklearn docs)
- Chapter 4 Training Models (linear/logistic regression, regularization)
- Advice for Applying Machine Learning
- Chapter 5 SVMs (plus kernels)
- Chapter 6 Decision Trees (basics)
- Chapter 7 Ensemble Learning and Random Forests (xgboost, RandomForest)
- Chapter 8 Dimensionality Reduction (PCA, t-SNE, LDA)
- Machine Learning System Design, plus Google's Best Practices for ML Engineering
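As a small sketch of what the early chapters cover (fitting a regularized logistic regression and reading off precision/recall), assuming scikit-learn's bundled breast cancer dataset purely for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features so the solver converges; C is the *inverse* regularization
# strength, so smaller C means a stronger penalty.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=1.0))
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```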
A group of friends and I worked through this content at a cadence of one meeting every other Wednesday, starting in late June 2018 and wrapping up at the end of 2018.
- Neural Networks and Deep Learning by Michael Nielsen http://neuralnetworksanddeeplearning.com/index.html
- fast.ai
- Practical Deep Learning for Coders https://course.fast.ai/videos/?lesson=1
- Part 2: Deep Learning from the Foundations https://course.fast.ai/videos/?lesson=8
- distill is a good resource for in-depth, visual explanations of specific topics
- Kyunghyun Cho's lecture notes on "Natural Language Processing with Representation Learning": https://github.com/nyu-dl/NLP_DL_Lecture_Note/blob/master/lecture_note.pdf
- Jacob Eisenstein's textbook on "Natural Language Processing" (https://github.com/jacobeisenstein/gt-nlp-class/blob/master/notes/eisenstein-nlp-notes.pdf)
It's easy to get intimidated by the math in papers. I found that taking the time to relearn linear algebra and some calculus has had compounding returns!
- Matrix Calculus by Terence Parr and Jeremy Howard
- backprop chapter in Neural Networks and Deep Learning
- Matrix Algebra - Linear Algebra for Deep Learning
- 3blue1brown for practical and visual linear algebra
- for theoretical linear algebra: Finite Dimensional Vector Spaces
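One way to make the matrix calculus concrete is to derive a gradient by hand and then check it against autograd. A small sketch (the example is mine, not from the resources above): for f(x) = xᵀAx, the hand derivation gives ∇f = (A + Aᵀ)x.

```python
import torch

A = torch.randn(4, 4)
x = torch.randn(4, requires_grad=True)

# f(x) = x^T A x; the hand-derived gradient is (A + A^T) x.
f = x @ A @ x
f.backward()

hand_gradient = (A + A.T) @ x.detach()
print(torch.allclose(x.grad, hand_gradient, atol=1e-5))  # True
```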
Once you've understood common concepts, the best way to keep up to date with research and continue learning beyond courses is by reading and reimplementing papers.
- track papers through Zotero or Mendeley. I started off using Zotero but switched to Mendeley to share folders/papers with the groups I was in
- twitter - follow 20+ practitioners/researchers you admire on twitter to find interesting papers
- ML subreddit
- AI/DL fb groups
- arXiv - there are 10-20 new papers on arXiv every day in AI/computational linguistics, so you could just browse arXiv every day for the latest papers in the topics you're most interested in
- AI blogs
- your objective is to figure out quickly which papers NOT to read
- spend time in the conclusions
- try to answer the question: what is novel?
- create a reading group! Even just one other person can save you 50% of the time.
Purpose: to break down deep learning concepts and architectures into code using PyTorch! It's easy to import from libraries and never really understand what something is doing. This repo reimplements common architectures and atomic concepts in deep learning in simpler code.
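For example, here's roughly what one of these atomic concepts, dropout, looks like written out by hand (a minimal sketch of inverted dropout; details here are illustrative, not the repo's exact code):

```python
import torch
import torch.nn as nn

class Dropout(nn.Module):
    """Inverted dropout: scale at train time so eval needs no rescaling."""

    def __init__(self, p: float = 0.5):
        super().__init__()
        self.p = p  # probability of zeroing an activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return x  # identity at eval time
        # Keep each activation with probability 1 - p, then rescale the
        # survivors so the expected value of the output matches the input.
        mask = (torch.rand_like(x) > self.p).float()
        return mask * x / (1.0 - self.p)

layer = Dropout(p=0.5)
layer.train()
print(layer(torch.ones(2, 4)))  # roughly half zeros, survivors scaled to 2.0
layer.eval()
print(layer(torch.ones(2, 4)))  # all ones, unchanged
```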