Course details
This course continues where my first course, Deep Learning in Python, left off. You already know how to build an artificial neural network in Python, and you have a plug-and-play script that you can use for TensorFlow. Neural networks are one of the staples of machine learning, and they are always a top contender in Kaggle contests. If you want to improve your skills with neural networks and deep learning, this is the course for you.
You already learned about backpropagation (and because of that, this course contains basically NO MATH), but there were a lot of unanswered questions. How can you modify it to improve training speed? In this course you will learn about batch and stochastic gradient descent, two commonly used techniques that allow you to train on just a small sample of the data at each iteration, greatly speeding up training.
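To make the idea concrete, here is a minimal Numpy sketch of mini-batch stochastic gradient descent applied to linear regression. The variable names (lr, batch_size, and so on) are illustrative placeholders, not the course's actual code, which applies the same idea to full neural networks.

import numpy as np

np.random.seed(0)
N, D = 1000, 5
X = np.random.randn(N, D)
true_w = np.random.randn(D)
y = X.dot(true_w) + 0.1 * np.random.randn(N)

w = np.zeros(D)          # parameters we want to learn
lr = 0.01                # learning rate
batch_size = 32          # size of the small sample used per update

for epoch in range(20):
    idx = np.random.permutation(N)                    # shuffle the data each epoch
    for start in range(0, N, batch_size):
        batch = idx[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        grad = Xb.T.dot(Xb.dot(w) - yb) / len(batch)  # gradient on the mini-batch only
        w -= lr * grad                                # one cheap update per mini-batch

print("max error vs. true weights:", np.abs(w - true_w).max())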
You will also learn about momentum, which can help carry you through local minima and keep you from having to be too conservative with your learning rate. You will also learn about adaptive learning rate techniques like AdaGrad and RMSprop, which can further speed up your training.
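As a rough illustration (not the course's exact code), the momentum and RMSprop updates can be written as plain Numpy update steps like the following; the function names and default hyperparameters here are assumptions made for the sketch.

import numpy as np

def momentum_step(w, grad, velocity, lr=0.01, mu=0.9):
    # velocity accumulates past gradients, which can carry the weights through
    # shallow local minima and allow a larger effective step size
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.999, eps=1e-8):
    # cache is a moving average of squared gradients, giving each weight
    # its own adaptive learning rate
    cache = decay * cache + (1.0 - decay) * grad * grad
    return w - lr * grad / (np.sqrt(cache) + eps), cache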
Because you already know about the fundamentals of neural networks, we are going to talk about more modern techniques, like dropout regularization, which we will implement in both TensorFlow and Theano. The course is constantly being updated and more advanced regularization techniques are coming in the near future.
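For a sense of what dropout does mechanically, here is a minimal "inverted dropout" sketch in plain Numpy, assuming a keep probability p_keep; in the course itself it is implemented with TensorFlow and Theano rather than raw Numpy.

import numpy as np

def dropout_forward(Z, p_keep=0.8, training=True):
    # at test time, use all units unchanged
    if not training:
        return Z
    # randomly zero out units and rescale so the expected activation is unchanged
    mask = (np.random.rand(*Z.shape) < p_keep) / p_keep
    return Z * mask

hidden = np.random.randn(4, 10)    # a fake hidden-layer activation
print(dropout_forward(hidden))     # ~20% of entries zeroed, the rest scaled up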
In my last course, I just wanted to give you a little sneak peek at TensorFlow. In this course we are going to start from the basics so you understand exactly what's going on - what are TensorFlow variables and expressions, and how can you use these building blocks to create a neural network? We are also going to look at a library that's been around much longer and is very popular for deep learning - Theano. With this library we will also examine the basic building blocks - variables, expressions, and functions - so that you can build neural networks in Theano with confidence.
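As a taste of that "building blocks" workflow, here is an illustrative snippet (assuming you have Theano installed; it is not a course file) showing a symbolic variable, an expression, and a compiled function:

import numpy as np
import theano
import theano.tensor as T

A = T.matrix('A')         # symbolic variables: they have a type
v = T.vector('v')         # and a name, but no value yet

y = T.tanh(A.dot(v))      # an expression built from the variables

f = theano.function(inputs=[A, v], outputs=y)   # compile the expression into a callable function

print(f(np.random.randn(3, 4), np.random.randn(4)))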
Because one of the main advantages of TensorFlow and Theano is the ability to use the GPU to speed up training, I will show you how to set up a GPU instance on AWS and compare the speed of CPU vs. GPU training for a deep neural network.
With all this extra speed, we are going to look at a real dataset - the famous MNIST dataset (images of handwritten digits) - and compare our results against various known benchmarks.
This course focuses on "how to build and understand", not just "how to use". Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about "remembering facts", it's about "seeing for yourself" via experimentation. It will teach you how to visualize what's happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.
NOTES:
All the code for this course can be downloaded from my github: /lazyprogrammer/machine_learning_examples
In the directory: ann_class2
Make sure you always "git pull" so you have the latest version!
HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:
- calculus
- linear algebra
- probability
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations, loading a CSV file
TIPS (for getting through the course):
- Watch it at 2x.
- Take handwritten notes. This will drastically increase your ability to retain the information.
- Ask lots of questions on the discussion board. The more the better!
- Realize that most exercises will take you days or weeks to complete.
USEFUL COURSE ORDERING:
- (The Numpy Stack in Python)
- Linear Regression in Python
- Logistic Regression in Python
- (Supervised Machine Learning in Python)
- (Bayesian Machine Learning in Python: A/B Testing)
- Deep Learning in Python
- Practical Deep Learning in Theano and TensorFlow
- (Supervised Machine Learning in Python 2: Ensemble Methods)
- Convolutional Neural Networks in Python
- (Easy NLP)
- (Cluster Analysis and Unsupervised Machine Learning)
- Unsupervised Deep Learning
- (Hidden Markov Models)
- Recurrent Neural Networks in Python
- Natural Language Processing with Deep Learning in Python