Keys to the Programming Assignments of the Deep Learning Specialization by Andrew Ng
The main reason for compiling this PDF of the programming assignment keys for the Deep Learning Specialization by Andrew Ng is to make the deep learning techniques easier to learn.
This document may be freely used and distributed for learning and scientific research, but it must not be used for commercial purposes; the contributor is not responsible for the consequences of any commercial use.
Thanks to Andrew Ng and his team for their great work.
1.1.1 About iPython Notebooks
1.1.2 Building basic functions with numpy
1.1.2.1 sigmoid function, np.exp()
1.1.2.2 Sigmoid gradient
1.1.2.3 Reshaping arrays
1.1.2.4 Normalizing rows
1.1.2.5 Broadcasting and the softmax function
1.1.3 Vectorization
1.1.3.1 Implement the L1 and L2 loss functions
1.2.1 Packages
1.2.2 Overview of the Problem set
1.2.3 General Architecture of the learning algorithm
1.2.4 Building the parts of our algorithm
1.2.4.1 Helper functions
1.2.4.2 Initializing parameters
1.2.4.3 Forward and Backward propagation
1.2.4.4 Optimization
1.2.5 Merge all functions into a model
1.2.6 Further analysis (optional/ungraded exercise)
1.2.7 Test with your own image (optional/ungraded exercise)
1.2.8 Code of Logistic Regression with a Neural Network
1.3 Planar data classification with a hidden layer
1.3.1 Packages
1.3.2 Dataset
1.3.3 Simple Logistic Regression
1.3.4 Neural Network model
1.3.4.1 Defining the neural network structure
1.3.4.2 Initialize the model’s parameters
1.3.4.3 The Loop
1.3.4.4 Integrate parts 1.3.4.1, 1.3.4.2 and 1.3.4.3 in nn_model()
1.3.4.5 Tuning hidden layer size (optional/ungraded exercise)
1.3.5 Code of Neural Network With a Hidden Layer
1.4.1 Packages
1.4.2 Outline of the Assignment
1.4.3 Initialization
1.4.3.1 2-layer Neural Network
1.4.3.2 L-layer Neural Network
1.4.4 Forward propagation module
1.4.4.1 Linear Forward
1.4.4.2 Linear-Activation Forward
1.4.4.3 L-Layer Model
1.4.5 Cost function
1.4.6 Backward propagation module
1.4.6.1 Linear backward
1.4.6.2 Linear-Activation backward
1.4.6.3 L-Model Backward
1.4.6.4 Update Parameters
1.4.6.5 Conclusion
1.4.7 Code of Deep Neural Network
1.5.1 Packages
1.5.2 Dataset
1.5.3 Architecture of your model
1.5.3.1 2-layer neural network
1.5.3.2 L-layer deep neural network
1.5.3.3 General methodology
1.5.4 Two-layer neural network
1.5.5 L-layer Neural Network
1.5.6 Results Analysis
1.5.7 Test with your own image (optional/ungraded exercise)
1.5.8 Code of Deep Neural Network for Image Classification: Application
2.1.1 Initialization
2.1.1.1 Package
2.1.1.2 Neural Network model
2.1.1.3 Zero initialization
2.1.1.4 Random initialization
2.1.1.5 He initialization
2.1.1.6 Conclusions
2.1.1.7 Code of initialization
2.1.2 Regularization
2.1.2.1 Non-regularized model
2.1.2.2 L2 Regularization
2.1.2.3 Dropout
2.1.2.4 Conclusions
2.1.3 Gradient Checking
2.1.3.1 How does gradient checking work?
2.1.3.2 1-dimensional gradient checking
2.1.3.3 N-dimensional gradient checking
2.2.1 Packages
2.2.2 Gradient Descent
2.2.3 Mini-Batch Gradient descent
2.2.4 Momentum
2.2.5 Adam
2.2.6 Model with different optimization algorithms
2.2.6.1 Mini-batch Gradient descent
2.2.6.2 Mini-batch gradient descent with momentum
2.2.6.3 Mini-batch gradient descent with Adam
2.2.6.4 Summary
2.3.1 Exploring the TensorFlow Library
2.3.1.1 Linear function
2.3.1.2 Computing the sigmoid
2.3.1.3 Computing the Cost
2.3.1.4 Using One Hot encodings
2.3.1.5 Initialize with zeros and ones
2.3.2 Building your first neural network in TensorFlow
2.3.2.1 Problem statement: SIGNS Dataset
2.3.2.2 Create placeholders
2.3.2.3 Initializing the parameters
2.3.2.4 Forward propagation in TensorFlow
2.3.2.5 Compute cost
2.3.2.6 Backward propagation & parameter updates
2.3.2.7 Building the model
2.3.2.8 Test with your own image (optional / ungraded exercise)
2.3.2.9 Summary
3.1.1 Packages
3.1.2 Outline of the Assignment
3.1.3 Convolutional Neural Networks
3.1.3.1 Zero-Padding
3.1.3.2 Single step of convolution
3.1.3.3 Convolutional Neural Networks - Forward pass
3.1.4 Pooling layer
3.1.4.1 Forward Pooling
3.1.5 Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
3.1.5.1 Convolutional layer backward pass
3.1.5.2 Pooling layer - backward pass
3.2.1 TensorFlow model
3.2.2 Create placeholders
3.2.3 Initialize parameters
3.2.4 Forward propagation
3.2.5 Compute cost
3.2.6 Model
3.3.1 The Happy House
3.3.2 Building a model in Keras
3.3.3 Conclusion
3.3.4 Test with your own image (Optional)
3.3.5 Other useful functions in Keras (Optional)
3.4.1 The problem of very deep neural networks
3.4.2 Building a Residual Network
3.4.2.1 The identity block
3.4.2.2 The convolutional block
3.4.3 Building your first ResNet model (50 layers)
3.4.4 Test on your own image (Optional/Ungraded)
3.5.1 Problem Statement
3.5.2 YOLO
3.5.2.1 Model details
3.5.2.2 Filtering with a threshold on class scores
3.5.2.3 Non-max suppression
3.5.2.4 Wrapping up the filtering
3.5.3 Test YOLO pretrained model on images
3.5.3.1 Defining classes, anchors and image shape
3.5.3.2 Loading a pretrained model
3.5.3.3 Convert output of the model to usable bounding box tensors
3.5.3.4 Filtering boxes
3.5.3.5 Run the graph on an image
3.6.1 Naive Face Verification
3.6.2 Encoding face images into a 128-dimensional vector
3.6.2.1 Using a ConvNet to compute encodings
3.6.2.2 The Triplet Loss
3.6.3 Loading the trained model
3.6.4 Applying the model
3.6.4.1 Face Verification
3.6.4.2 Face Recognition
3.7.1 Problem Statement
3.7.2 Transfer Learning
3.7.3 Neural Style Transfer
3.7.3.1 Computing the content cost
3.7.3.2 Computing the style cost
3.7.3.3 Defining the total cost to optimize
3.7.4 Solving the optimization problem
3.7.5 Test with your own image (Optional/Ungraded)
3.7.6 Conclusion