Modern Deep Learning In Python

Build with modern libraries like TensorFlow, Theano, Keras, PyTorch, CNTK, and MXNet. Train faster with a GPU on AWS.

What you’ll learn


  • Apply momentum to backpropagation to train neural networks
  • Apply adaptive learning rate procedures like AdaGrad, RMSprop, and Adam to backpropagation to train neural networks
  • Understand the basic building blocks of Theano
  • Build a neural network in Theano
  • Understand the basic building blocks of TensorFlow
  • Build a neural network in TensorFlow
  • Build a neural network that performs well on the MNIST dataset
  • Understand the difference between full gradient descent, batch gradient descent, and stochastic gradient descent
  • Understand and implement dropout regularization in Theano and TensorFlow
  • Understand and implement batch normalization in Theano and TensorFlow
  • Write a neural network using Keras (a minimal sketch follows this list)
  • Write a neural network using PyTorch
  • Write a neural network using CNTK
  • Write a neural network using MXNet
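For the Keras objective above, here is a minimal sketch of the kind of dense network one might build for MNIST. The layer sizes, dropout rate, and training settings are illustrative assumptions, not the course's exact model:

```python
# Minimal Keras sketch: a dense network on MNIST.
# Layer sizes and hyperparameters are illustrative assumptions.
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dropout(0.2),      # dropout regularization (covered later)
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",     # Adam adaptive learning rate (covered later)
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=5,
          validation_data=(x_test, y_test))
```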
Requirements
  • Be comfortable with Python, NumPy, and Matplotlib. Install Theano and TensorFlow.
  • If you do not yet know about gradient descent, backprop, and softmax, take my earlier course, Deep Learning in Python, and then return to this course.

Description

This course continues where my first course, Deep Learning in Python, left off. You already know how to build an artificial neural network in Python, and you have a plug-and-play script that you can use for TensorFlow. Neural networks are one of the staples of machine learning, and they are always a top contender in Kaggle contests. If you want to improve your skills with neural networks and deep learning, this is the course for you.

You already learned about backpropagation, but there were a lot of unanswered questions. How can you modify it to improve training speed? In this course you will learn about batch and stochastic gradient descent, two commonly used techniques that allow you to train on just a small sample of the data at each iteration, greatly speeding up training time.
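To make the distinction concrete, here is a minimal NumPy sketch of minibatch stochastic gradient descent on a toy linear model. The data, model, and hyperparameters are illustrative assumptions; setting batch_size to 1 gives pure stochastic gradient descent, and setting it to len(X) recovers full gradient descent:

```python
import numpy as np

# Toy data for illustration: y = 3x + noise (not course data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=1000)

w = 0.0          # single weight of a linear model y_hat = w * x
lr = 0.1         # learning rate
batch_size = 32  # 1 -> stochastic GD, len(X) -> full GD

for epoch in range(5):
    order = rng.permutation(len(X))            # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        pred = w * X[batch, 0]
        grad = 2.0 * np.mean((pred - y[batch]) * X[batch, 0])  # d(MSE)/dw on the batch
        w -= lr * grad

print(w)  # converges toward 3.0
```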

You will also learn about momentum, which can be helpful for carrying you through local minima and preventing you from having to be too conservative with your learning rate, and about adaptive learning rate techniques like AdaGrad, RMSprop, and Adam, which can also help speed up your training.
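The update rules themselves are short. Below is a NumPy sketch of classical momentum and Adam for a single parameter; the function names and hyperparameter defaults are my own illustrative choices, not the course's code:

```python
import numpy as np

def momentum_step(w, grad, velocity, lr=0.01, mu=0.9):
    """Classical momentum: the velocity accumulates past gradients,
    helping carry the weights through shallow local minima."""
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: adapts the step size per parameter using running
    estimates of the gradient's first and second moments."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for zero init
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(w) = w**2, whose gradient is 2w.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    w, m, v = adam_step(w, 2.0 * w, m, v, t, lr=0.1)
print(w)  # approaches 0
```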

Because you already know about the fundamentals of neural networks, we are going to talk about more modern techniques, like dropout regularization and batch normalization, which we will implement in both TensorFlow and Theano. The course is constantly being updated and more advanced regularization techniques are coming in the near future.
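As a preview of what those implementations look like, here is a minimal NumPy sketch of inverted dropout and the training-time forward pass of batch normalization. Shapes, rates, and names here are illustrative assumptions; the course implements these in TensorFlow and Theano rather than raw NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, p_keep=0.8, train=True):
    """Inverted dropout: randomly zero activations during training and
    rescale by p_keep so the expected activation is unchanged;
    do nothing at test time."""
    if not train:
        return a
    mask = rng.random(a.shape) < p_keep
    return a * mask / p_keep

def batchnorm(a, gamma, beta, eps=1e-5):
    """Batch normalization (training-time forward pass): standardize
    each feature over the minibatch, then scale by gamma and shift by beta."""
    mu = a.mean(axis=0)
    var = a.var(axis=0)
    return gamma * (a - mu) / np.sqrt(var + eps) + beta

a = rng.normal(size=(4, 3))  # toy batch: 4 samples, 3 hidden units
print(dropout(a))
print(batchnorm(a, gamma=np.ones(3), beta=np.zeros(3)))
```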

Who is the target audience?
  • Students and professionals who want to deepen their machine learning knowledge
  • Data scientists who want to learn more about deep learning
  • Data scientists who already know about backpropagation and gradient descent and want to improve their training with stochastic batch training, momentum, and adaptive learning rate procedures like RMSprop
  • Those who do not yet know about backpropagation or softmax should take my earlier course, Deep Learning in Python, first
Direct Download Now
