Siraj Raval promises to teach you Deep Learning in 6 weeks. Given his track record, it’s safe to say he can pull it off.
All neural networks use activation functions, but the reasons for using them are rarely made clear!
In this video, the great Siraj Raval discusses what activation functions are, when to use them, and how they differ.
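As a quick taste of what the video covers, here is a minimal NumPy sketch of three common activation functions (the function names and comments are illustrative, not from the video):

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs to (0, 1); historically popular, but saturates for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered cousin of sigmoid; squashes inputs to (-1, 1)
    return np.tanh(x)

def relu(x):
    # Rectified linear unit: cheap to compute, doesn't saturate for x > 0
    return np.maximum(0.0, x)
```

The choice matters because it shapes the gradients flowing backward during training: saturating functions like sigmoid can slow learning in deep networks, which is one reason ReLU became the common default.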
In this episode of the AI Show, Micheleen Harris dives into when and why one would use deep learning over classical machine learning.
While many tasks can be performed cheaply and well with classical machine learning and packages like scikit-learn, every once in a while a task is better suited to a neural network architecture implemented with deep learning methods – for example, when there are large amounts of data or other methods don't reach sufficient accuracy. Watch to find out more and to hear about some Python packages that make life easier.
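To illustrate the "cheap and well" end of that spectrum, here is a minimal scikit-learn sketch (the dataset and classifier are chosen for illustration, not taken from the episode):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A small tabular dataset: 150 samples, 4 features, 3 classes
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A classical model trains in milliseconds with no GPU or architecture tuning
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

For small, structured data like this, a few lines of classical machine learning are hard to beat; deep learning earns its extra cost mainly on large, unstructured inputs such as images, audio, or text.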
Sam Witteveen, a Google Developer Expert in Machine Learning, speaks at a recent conference in Singapore about strategies for getting good at deep learning quickly.
Pieter Abbeel presented Learning to Learn for Robotic Control at the Neural Information Processing Systems Conference last December.
I’ve blogged about the technology behind Deep Fakes before, but here’s a look at the technology from BBC Click.
Lex Fridman’s lecture 3 of course 6.S094: Deep Learning for Self-Driving Cars (2018 version).
The class is free and open to everyone. It is an introduction to the practice of deep learning through the applied theme of building a self-driving car.
In this episode of the AI Show, Erika explains how to create deep learning models with music as the input. She begins by describing the problem of generating music, and in particular how she extracted the appropriate features from a MIDI file. She then describes the deep learning model she used to generate music.
Autoencoders are a type of neural network that learns to reconstruct the input data it's given.
Watch this explainer video by Siraj Raval as he explains how autoencoders work.
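To make the idea concrete, here is a minimal NumPy sketch of a linear autoencoder (a simplified stand-in for what the video covers; all names and the toy data are illustrative). It compresses 4-D data through a 2-D bottleneck and learns, by gradient descent, to reconstruct it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 4-D that actually lie on a 2-D subspace
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 4))

# Encoder compresses 4 -> 2; decoder expands 2 -> 4
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

def reconstruction_error(X, W_enc, W_dec):
    # Mean squared error between the input and its reconstruction
    X_hat = X @ W_enc @ W_dec
    return np.mean((X - X_hat) ** 2)

lr = 0.01
initial_error = reconstruction_error(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                      # encode into the bottleneck
    X_hat = Z @ W_dec                  # decode back to input space
    grad_out = 2 * (X_hat - X) / len(X)
    g_dec = Z.T @ grad_out             # gradient w.r.t. decoder weights
    g_enc = X.T @ (grad_out @ W_dec.T) # gradient w.r.t. encoder weights
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
final_error = reconstruction_error(X, W_enc, W_dec)
```

The narrow bottleneck is the whole point: because the network cannot simply copy its input, it is forced to learn a compressed representation, which is what makes autoencoders useful for dimensionality reduction and denoising.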