This is a continuously updated repository documenting my personal journey learning data science and machine learning related topics.
- Goal: Introduce machine learning content in Jupyter Notebook format. The content aims to strike a good balance between mathematical notation, educational implementation from scratch (using Python's scientific stack, including numpy, scipy, pandas, matplotlib, etc.) and open-source library usage (scikit-learn, pyspark, gensim, keras, tensorflow).
- Short Note: Within each section, documents are listed in reverse chronological order of their start date (the date the first notebook in that folder was created; if a notebook was later updated, the actual date appears at the top of that notebook). Each document is independent of the others unless specified otherwise.
End-to-end projects, including data preprocessing and model building.
- Kaggle challenge: Predict whether a car purchased at auction is an unfortunate purchase. [folder]
A/B testing, a.k.a. experimental design. Includes 1) a quick review of necessary statistical concepts; 2) methods and workflow/thought process for conducting the test; 3) caveats to look out for.
- Frequentist A/B testing (includes a quick review of concepts such as p-value, confidence interval). [nbviewer]
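For a taste of the workflow, here is a minimal sketch of the two-proportion z-test at the heart of frequentist A/B testing; the conversion counts are made up for illustration:

```python
import numpy as np
from scipy import stats

# hypothetical numbers: conversions and visitors for control vs. treatment
conversions = np.array([200, 240])
visitors = np.array([2000, 2000])
rates = conversions / visitors

# pooled rate under the null hypothesis that both groups convert equally
pooled = conversions.sum() / visitors.sum()
se = np.sqrt(pooled * (1 - pooled) * (1 / visitors[0] + 1 / visitors[1]))
z = (rates[1] - rates[0]) / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value

# 95% confidence interval for the difference in conversion rates
se_diff = np.sqrt((rates * (1 - rates) / visitors).sum())
ci = (rates[1] - rates[0]) + np.array([-1.96, 1.96]) * se_diff
print(z, p_value, ci)
```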
Methods for selecting and evaluating models/algorithms.
- K-fold cross validation, grid/random search from scratch. [nbviewer]
- AUC (Area under the ROC, precision/recall curve) from scratch (includes building a custom scikit-learn transformer). [nbviewer]
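As a flavor of the AUC notebook, a minimal sketch computing ROC AUC from scratch via the rank-sum (Mann-Whitney) formulation; it ignores tied scores, and the toy labels and scores are made up:

```python
import numpy as np

def auc_score(y_true, y_score):
    """ROC AUC via ranks: probability a random positive outscores a random negative."""
    order = np.argsort(y_score)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(y_score) + 1)   # rank 1 = lowest score
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_score(y_true, y_score))   # 0.75, matching sklearn.metrics.roc_auc_score
```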
Note that the following notebooks are not a tutorial on the basics of Spark; they assume you're already somewhat familiar with it or can pick it up quickly by checking the documentation along the way. For those interested, there's also a pyspark rdd cheatsheet and a pyspark dataframe cheatsheet that may come in handy.
- Pyspark installation on Mac. [markdown]
- Examples of manipulating data (crimes data) and building a RandomForest model with Spark (a minimal sketch follows this list). [nbviewer]
- PCA with Spark's ML. [nbviewer]
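A minimal sketch of what the Spark ML workflow looks like, assuming a local SparkSession; the toy DataFrame and its column names are made up for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName('example').getOrCreate()
df = spark.createDataFrame(
    [(0.0, 1.2, 3.4), (1.0, 0.5, 1.1), (0.0, 2.2, 0.3), (1.0, 0.7, 1.5)],
    ['label', 'feature1', 'feature2'])

# Spark ML models expect a single vector column of features
assembler = VectorAssembler(inputCols=['feature1', 'feature2'], outputCol='features')
rf = RandomForestClassifier(labelCol='label', featuresCol='features', numTrees=10)
model = Pipeline(stages=[assembler, rf]).fit(df)
model.transform(df).select('label', 'prediction').show()
```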
Dimensionality reduction methods.
- Principal Component Analysis (PCA) from scratch (a minimal sketch follows this list). [nbviewer]
- Introduction to Singular Value Decomposition (SVD), also known as Latent Semantic Analysis/Indexing (LSA/LSI). [nbviewer]
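A minimal sketch of PCA from scratch via eigendecomposition of the covariance matrix, on made-up data:

```python
import numpy as np

def pca(X, n_components):
    """Project X onto the directions of largest variance."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigh: covariance is symmetric
    order = np.argsort(eigvals)[::-1]             # sort by explained variance
    components = eigvecs[:, order[:n_components]]
    return X_centered @ components                # projected data

X = np.random.RandomState(0).randn(100, 5)
print(pca(X, 2).shape)   # (100, 2)
```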
Recommendation System. Newcomers to the field should go through the first notebook to understand the basics of matrix factorization methods. The second notebook, "ALS-WR for implicit feedback data", and the third notebook, "Bayesian Personalized Ranking", are independent of one another, as they are simply different algorithms, though it's still ideal to go through them in the listed sequence.
- Alternating Least Squares with Weighted Regularization (ALS-WR) from scratch (a minimal sketch follows this list). [nbviewer]
- ALS-WR for implicit feedback data from scratch & mean average precision at k (mapk) and normalized cumulative discounted gain (ndcg) evaluation. [nbviewer]
- Bayesian Personalized Ranking (BPR) from scratch & AUC evaluation. [nbviewer]
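To give a flavor of the alternating least squares idea in the first notebook, a minimal sketch on a fully observed toy rating matrix; the notebook itself handles missing entries and the weighted regularization, which this sketch omits:

```python
import numpy as np

def als(R, n_factors=2, reg=0.1, n_iters=20):
    """Alternately solve the two closed-form least-squares problems."""
    n_users, n_items = R.shape
    rng = np.random.RandomState(0)
    U = rng.randn(n_users, n_factors)
    V = rng.randn(n_items, n_factors)
    eye = reg * np.eye(n_factors)
    for _ in range(n_iters):
        U = np.linalg.solve(V.T @ V + eye, V.T @ R.T).T   # fix V, solve for U
        V = np.linalg.solve(U.T @ U + eye, U.T @ R).T     # fix U, solve for V
    return U, V

R = np.array([[5., 3., 1.], [4., 3., 1.], [1., 1., 5.]])
U, V = als(R)
print(np.round(U @ V.T, 1))   # reconstruction approaches R
```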
Tree-based models for both regression and classification tasks.
- Decision Tree from scratch. [nbviewer]
- Random Forest from scratch and Extra Trees. [nbviewer]
- Gradient Boosting Machine (GBM) from scratch (a minimal sketch follows this list). [nbviewer]
- Xgboost API walkthrough (includes hyperparameter tuning via a scikit-learn-like API). [nbviewer]
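A minimal sketch of the gradient boosting idea, using scikit-learn's DecisionTreeRegressor as the base learner: with squared loss, each new tree simply fits the residuals (negative gradient) of the current ensemble. The toy data is made up:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import make_regression

class SimpleGBM:
    def __init__(self, n_estimators=100, learning_rate=0.1, max_depth=3):
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.max_depth = max_depth

    def fit(self, X, y):
        self.base_ = y.mean()                  # start from the mean prediction
        self.trees_ = []
        pred = np.full(len(y), self.base_)
        for _ in range(self.n_estimators):
            residual = y - pred                # negative gradient of squared loss
            tree = DecisionTreeRegressor(max_depth=self.max_depth).fit(X, residual)
            pred += self.learning_rate * tree.predict(X)
            self.trees_.append(tree)
        return self

    def predict(self, X):
        pred = np.full(X.shape[0], self.base_)
        for tree in self.trees_:
            pred += self.learning_rate * tree.predict(X)
        return pred

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)
print(SimpleGBM(n_estimators=50).fit(X, y).predict(X[:3]))
```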
Association rule mining, also known as market-basket analysis.
TF-IDF and Topic Modeling are techniques specifically used for text analytics.
- TF-IDF (term frequency - inverse document frequency) from scratch. [nbviewer]
- K-means, K-means++ from scratch; Elbow method for choosing K. [nbviewer]
- Gaussian Mixture Model from scratch; AIC and BIC for choosing the number of Gaussians. [nbviewer]
- Topic Modeling with gensim's Latent Dirichlet Allocation (LDA). [nbviewer]
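A minimal sketch of gensim's LDA on a made-up toy corpus (real topic modeling needs far more text):

```python
from gensim import corpora, models

texts = [['cat', 'dog', 'pet'], ['dog', 'bone', 'pet'],
         ['stock', 'market', 'price'], ['market', 'price', 'trade']]
dictionary = corpora.Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
print(lda.print_topics())   # top words per inferred topic
```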
Best practices for doing data science in Python.
- View [nbviewer]
Curated notes on deep learning. Tensorflow is used to implement some of the models.
- Softmax Regression from scratch. [nbviewer]
- Softmax Regression using Tensorflow (Tensorflow hello world). [nbviewer]
- Multi-layer Neural Network using Tensorflow. [nbviewer]
- Convolutional Neural Network using Tensorflow. [nbviewer]
- Word2vec (skipgram + negative sampling) using Gensim (includes text preprocessing with spaCy). [nbviewer]
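A minimal sketch of training skip-gram word2vec with gensim on a made-up toy corpus; note that some parameter names (e.g. the vector dimension) vary across gensim versions, and real training needs far more text:

```python
from gensim.models import Word2Vec

sentences = [['machine', 'learning', 'is', 'fun'],
             ['deep', 'learning', 'is', 'fun'],
             ['machine', 'learning', 'with', 'python']]
# sg=1 selects skip-gram; negative=5 enables negative sampling
model = Word2Vec(sentences, sg=1, negative=5, min_count=1, window=2)
print(model.wv.most_similar('machine', topn=2))
```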
Walking through keras, a high-level deep learning library. Note that this is only an API walkthrough, not a tutorial on the details of deep learning. For those interested, there's also a keras cheatsheet that may come in handy.
- Multi-layer Neural Network (keras basics; a minimal sketch follows this list). [nbviewer]
- Multi-layer Neural Network hyperparameter tuning via a scikit-learn-like API. [nbviewer]
- Convolutional Neural Network (image classification). [nbviewer]
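A minimal keras Sequential sketch on made-up data; depending on your setup the import may be `from tensorflow import keras` instead of the standalone keras package:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=100)   # fake binary labels

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(20,)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```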
Naive Bayes and Logistic Regression for text classification.
- Building intuition with spam classification using scikit-learn (scikit-learn hello world). [nbviewer]
- Bernoulli and Multinomial Naive Bayes from scratch. [nbviewer]
- Logistic Regression (stochastic gradient descent) from scratch. [nbviewer]
- Chi-square feature selection. [nbviewer]
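A minimal sketch of chi-square feature selection on a made-up toy spam corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ['free money win prize', 'win free prize now',
        'meeting schedule tomorrow', 'project meeting notes']
labels = [1, 1, 0, 0]   # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
selector = SelectKBest(chi2, k=3).fit(X, labels)   # keep the 3 most dependent terms
mask = selector.get_support()
# get_feature_names_out requires scikit-learn >= 1.0; older versions use get_feature_names
print([w for w, keep in zip(vectorizer.get_feature_names_out(), mask) if keep])
```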
PyCon 2016: Practical Network Analysis Made Simple. Quickstart to networkx's API. Includes some basic graph plotting and algorithms.
- View [nbviewer]
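A minimal networkx sketch of the kind of graph construction and basic algorithms covered; the toy edges are made up:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([('a', 'b'), ('b', 'c'), ('c', 'd'), ('a', 'd'), ('d', 'e')])
print(nx.shortest_path(G, 'a', 'e'))        # e.g. ['a', 'd', 'e']
print(nx.degree_centrality(G))              # degree-based importance per node
print(list(nx.connected_components(G)))
```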
Building intuition on Ridge and Lasso regularization using scikit-learn.
- View [nbviewer]
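A minimal sketch of the Ridge versus Lasso contrast on made-up data: both shrink coefficients, but Lasso drives uninformative ones exactly to zero:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.RandomState(0)
X = rng.randn(100, 10)
y = X[:, 0] * 3 + X[:, 1] * 1.5 + rng.randn(100)   # only 2 informative features

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)
print(np.round(ridge.coef_, 2))   # all coefficients shrunk, none exactly zero
print(np.round(lasso.coef_, 2))   # most coefficients are exactly zero
```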
Genetic Algorithm. Math-free explanation and code from scratch.
- Starts from a simple optimization problem and extends it to the traveling salesman problem (TSP).
- View [nbviewer]
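A minimal genetic algorithm sketch in the spirit of the "simple optimization problem" starting point: evolve bitstrings to maximize the number of ones. The population size, mutation rate and other knobs are made up for illustration:

```python
import random

def genetic_algorithm(n_bits=20, pop_size=50, n_generations=40, mutation_rate=0.05):
    fitness = sum   # fitness = number of 1-bits in the string
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_generations):
        # tournament selection: keep the fitter of two random individuals
        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for i in range(0, pop_size, 2):
            cut = random.randint(1, n_bits - 1)   # single-point crossover
            children.append(parents[i][:cut] + parents[i + 1][cut:])
            children.append(parents[i + 1][:cut] + parents[i][cut:])
        # bit-flip mutation
        pop = [[1 - bit if random.random() < mutation_rate else bit
                for bit in child] for child in children]
    return max(pop, key=fitness)

print(genetic_algorithm())   # converges toward the all-ones string
```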
Choosing the optimal cutoff value for logistic regression using cost-sensitive mistakes (i.e., when the cost of misclassification differs between the two classes) on a dataset with unbalanced binary classes, e.g., the majority of data points in the dataset have a positive outcome while few have a negative one, or vice versa. The notion extends to any classification algorithm that can predict class probabilities; this document simply uses logistic regression for illustration.
- Visualize the standard two-by-two confusion matrix and the ROC curve with costs using ggplot2.
- View [Rmarkdown]
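The document itself works in R with ggplot2; as a language-neutral illustration, here is a Python sketch of the core idea: sweep candidate cutoffs and keep the one with the lowest total misclassification cost. The costs and fake predicted probabilities are made up:

```python
import numpy as np

def best_cutoff(y_true, y_prob, fp_cost=1.0, fn_cost=5.0):
    """Pick the cutoff minimizing total cost when false negatives cost more."""
    cutoffs = np.linspace(0.01, 0.99, 99)
    costs = []
    for c in cutoffs:
        y_pred = (y_prob >= c).astype(int)
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        costs.append(fp * fp_cost + fn * fn_cost)
    return cutoffs[int(np.argmin(costs))]

rng = np.random.RandomState(0)
y_true = rng.binomial(1, 0.3, size=500)
y_prob = np.clip(y_true * 0.4 + rng.rand(500) * 0.6, 0, 1)   # fake probabilities
print(best_cutoff(y_true, y_prob))   # falls below 0.5 when FNs are costlier
```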
A collection of scattered old clustering documents in R.
- 2015.12.08 | Toy sample code of the LDA algorithm (Gibbs sampling) and the topicmodels library. [Rmarkdown]
- 2015.11.19 | k-shingle, Minhash and Locality Sensitive Hashing for solving the problem of finding textually similar documents. [Rmarkdown]
- 2015.11.17 | Introducing tf-idf (term frequency-inverse document frequency), a text mining technique. Also uses it to perform text clustering via hierarchical clustering. [Rmarkdown]
- 2015.11.06 | Some useful evaluations when working with hierarchical clustering and K-means clustering (K-means++ is used here). Includes the Calinski-Harabasz index for determining the right K (cluster number) and bootstrap evaluation of the clustering result's stability. [Rmarkdown]
Training Linear Regression with gradient descent in R.
- Briefly covers the interpretation and visualization of linear regression's summary output.
- View [Rmarkdown]
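The document itself is in R; for illustration, a Python sketch of the same gradient descent procedure on made-up data:

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, n_iters=1000):
    """Fit linear regression by gradient descent on mean squared error."""
    X = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        gradient = X.T @ (X @ theta - y) / len(y)   # gradient of MSE
        theta -= lr * gradient
    return theta

rng = np.random.RandomState(0)
X = rng.rand(200, 1)
y = 2 + 3 * X[:, 0] + rng.randn(200) * 0.1
print(gradient_descent(X, y))   # approaches [2, 3]
```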
- 2017.08.23 | Understanding iterables, iterators and generators. [nbviewer]
- 2017.07.12 | Cohort analysis. Visualizing user retention by cohort with seaborn's heatmap and illustrating pandas's unstack. [nbviewer]
- 2017.03.16 | Logging module. [nbviewer]
- 2016.12.26 | Data structures and algorithms from scratch. [folder]
- 2016.12.22 | Cython and Numba quickstart for high performance python. [nbviewer]
- 2016.06.22 | Optimizing Pandas (e.g. reduce memory usage using category type). [nbviewer]
- 2016.06.10 | Unittest. [Python script]
- 2016.04.26 | Using built-in data structures and algorithms. [nbviewer]
- 2016.04.26 | Tricks with strings and text. [nbviewer]
- 2016.04.17 | Python's decorators (useful for logging and timing functions; a minimal timing sketch follows this list). [nbviewer]
- 2016.03.18 | Pandas's pivot table. [nbviewer]
- 2016.03.02 | @classmethod, @staticmethod and @property. [nbviewer]
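As a small illustration of the decorators entry above, a minimal timing decorator sketch:

```python
import time
from functools import wraps

def timer(func):
    """Report how long the decorated function took to run."""
    @wraps(func)   # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f'{func.__name__} took {time.perf_counter() - start:.4f} seconds')
        return result
    return wrapper

@timer
def slow_sum(n):
    return sum(range(n))

slow_sum(10_000_000)
```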