The YouTube channel “Two Minute Papers” explores how neural networks can help artists be more creative.
In this video, the host continues his dive into the world of adversarial examples: images specifically engineered to fool neural networks into making completely wrong decisions!
This is a continuation from my previous post.
Interpreting what neural networks are doing is a tricky problem. In fact, they are often referred to as a “black box.”
In this video, the host dives into the approach of feature visualization, from simple neuron excitation to the Deep Visualization Toolbox and the Google DeepDream project.
Watch to open up that black box!
While at NDC in Sydney, Carl and Richard talked to Joe Albahari about using LINQPad to create neural nets from scratch.
LINQPad is an interactive development environment for .NET – originally focused on helping you build LINQ expressions. But as Joe explains, it can be used for all sorts of interactive coding experiences – including learning to build neural networks. Joe talks through the fundamentals of neural nets and what it’s like to build them yourself. Even if you move on to more advanced machine learning tooling, learning the fundamentals is useful!
Joe Albahari is an O’Reilly author and the inventor of LINQPad. He’s written seven books on C# and LINQ, including the upcoming “C# 7.0 in a Nutshell”. He speaks regularly at conferences and user groups, and has been a C# MVP for nine years running.
Press the play button below to listen here or visit the show page.
What is back propagation, you ask? Well, it’s our old friend gradient descent at work in neural networks: back propagation uses the chain rule to compute, layer by layer, the gradients that gradient descent needs.
If that explanation doesn’t work for you, then check out this video, where Siraj Raval explains back propagation in a way only he can: in song. Best of all, aside from the sick beats, is that the source code from the video is available on GitHub.
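If you’d rather see the idea in code than in song, here is a minimal sketch of back propagation in plain NumPy – a tiny two-layer network learning XOR. This is an illustrative example, not the source code from the video; all names and hyperparameters here are made up for the sketch.

```python
# Back propagation in a nutshell: a forward pass computes predictions,
# a backward pass applies the chain rule to get gradients, and gradient
# descent nudges the weights. Tiny 2-4-1 network on the XOR problem.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the error back through each layer
    d_out = (out - y) * out * (1 - out)      # chain rule at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # chain rule at the hidden layer
    # Gradient descent step
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print(np.round(out, 2).ravel())  # predictions drift toward [0, 1, 1, 0]
```

The whole trick is in the two `d_` lines: each one multiplies the error arriving from the layer above by the local derivative, which is exactly the chain rule that "propagates" the gradient backwards.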
A lot of times, research papers don’t have an associated codebase that you can browse and run yourself. In cases like that, you’ll have to code up the paper yourself. Very often, this is easier said than done.
In this video, Siraj Raval shows you how to read and dissect a research paper so you can implement it quickly.
Here is a primer introducing the concepts of deep learning with a specific focus on computer vision. It covers CNNs (convolutional neural networks), deep learning, and transfer learning. It was created as an introduction for people getting started with machine learning – and deep learning in particular – to explain some of the commonly used terms and introduce some of the popular approaches to solving computer vision challenges.
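To make the CNN terminology concrete: a convolutional layer is just a small filter slid across an image, producing a strong response wherever the filter’s pattern appears. Here is a minimal NumPy sketch of that sliding-window operation (written for illustration – real frameworks implement this far more efficiently):

```python
# A "valid" 2-D convolution (cross-correlation, as deep learning
# frameworks implement it): slide a small kernel over the image and
# take a weighted sum at each position.
import numpy as np

def conv2d(image, kernel):
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# A 5x5 "image" that is dark on the left, bright on the right...
image = np.zeros((5, 5))
image[:, 3:] = 1.0
# ...and a 3x2 vertical-edge filter: negative left, positive right.
kernel = np.array([[-1.0, 1.0]] * 3)

response = conv2d(image, kernel)
print(response)  # the column at the edge responds strongly (value 3.0)
```

A CNN stacks many such filters and *learns* their weights from data; transfer learning then reuses the filters learned on one task (say, ImageNet classification) as a starting point for another, which is why it works so well when you have little data.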