Yannic Kilcher explains the paper “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.”

Deep neural networks are often said to discover useful representations of the data. This paper challenges that prevailing view and suggests that, rather than learning representations, deep networks store superpositions of the training data in their weights and act as kernel machines at inference time. It is a theoretical paper with a main theorem and an accessible proof, and the result has many interesting implications for the field.
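
For reference, the paper's central claim has the schematic form of a kernel machine (this is a paraphrase of the theorem, not its exact statement; K^p is the "path kernel" integrated along the gradient-descent trajectory c(t)):

  y \approx \sum_{i=1}^{m} a_i \, K^{p}(x, x_i) + b,
  \qquad
  K^{p}(x, x_i) = \int_{c(t)} \nabla_w f_w(x) \cdot \nabla_w f_w(x_i) \, dt

Here the sum runs over the m training examples, the a_i are loss derivatives averaged along the training path, and b is the output of the initial (untrained) model.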

In this deeplizard episode, learn how to prepare and process our own custom data set of sign language digits, which will be used to train our fine-tuned MobileNet model in a future episode; a rough preprocessing sketch follows the section list below.

VIDEO SECTIONS

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:40 Obtain the Data
  • 01:30 Organize the Data
  • 09:42 Process the Data
  • 13:11 Collective Intelligence and the DEEPLIZARD HIVEMIND
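
As a rough illustration of the processing step (not the episode's exact code; the directory layout, paths, and batch sizes below are assumptions), the sorted images can be fed to MobileNet-sized batches with Keras generators:

  # Minimal sketch, assuming images are already organized into train/valid/test
  # class folders; paths and batch sizes are placeholders, not from the episode.
  from tensorflow.keras.preprocessing.image import ImageDataGenerator
  from tensorflow.keras.applications.mobilenet import preprocess_input

  train_path = 'Sign-Language-Digits-Dataset/train'   # hypothetical path
  valid_path = 'Sign-Language-Digits-Dataset/valid'   # hypothetical path
  test_path  = 'Sign-Language-Digits-Dataset/test'    # hypothetical path

  # MobileNet expects 224x224 inputs scaled to [-1, 1] by preprocess_input.
  train_batches = ImageDataGenerator(preprocessing_function=preprocess_input) \
      .flow_from_directory(train_path, target_size=(224, 224), batch_size=10)
  valid_batches = ImageDataGenerator(preprocessing_function=preprocess_input) \
      .flow_from_directory(valid_path, target_size=(224, 224), batch_size=10)
  # shuffle=False keeps test predictions aligned with test_batches.classes later on.
  test_batches = ImageDataGenerator(preprocessing_function=preprocess_input) \
      .flow_from_directory(test_path, target_size=(224, 224), batch_size=10, shuffle=False)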

deeplizard introduces MobileNets, a class of lightweight deep convolutional neural networks that are significantly smaller and faster than many other popular models; a short Keras usage sketch follows the section list below.

VIDEO SECTIONS

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:17 Intro to MobileNets
  • 02:56 Accessing MobileNet with Keras
  • 07:25 Getting Predictions from MobileNet
  • 13:32 Collective Intelligence and the DEEPLIZARD HIVEMIND
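
As a hedged sketch of what accessing the pretrained model and getting an ImageNet prediction looks like in Keras (the image file name here is a placeholder, not necessarily the episode's example):

  # Minimal sketch: load pretrained MobileNet and classify a single image.
  import numpy as np
  from tensorflow.keras.applications import mobilenet
  from tensorflow.keras.preprocessing import image
  from tensorflow.keras.applications.imagenet_utils import decode_predictions

  model = mobilenet.MobileNet()  # pretrained on ImageNet by default

  img = image.load_img('lizard.jpg', target_size=(224, 224))  # placeholder file name
  x = image.img_to_array(img)
  x = np.expand_dims(x, axis=0)         # add a batch dimension
  x = mobilenet.preprocess_input(x)     # scale pixels to [-1, 1]

  preds = model.predict(x)
  print(decode_predictions(preds, top=5))  # top-5 ImageNet labels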

In this video, Mandy from deeplizard demonstrates how to use the fine-tuned VGG16 Keras model that we trained in the last episode to predict on images of cats and dogs in our test set; a rough prediction-and-plotting sketch follows the section list below.

VIDEO SECTIONS

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:17 Predict with a Fine-tuned Model
  • 05:40 Plot Predictions with a Confusion Matrix
  • 05:16 Collective Intelligence and the DEEPLIZARD HIVEMIND
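
A rough sketch of the predict-and-plot step (the 'model' and 'test_batches' variables are assumed to exist from the earlier fine-tuning and preprocessing episodes, and the display labels are assumptions):

  # Minimal sketch: get predictions from a fine-tuned model on the test set
  # and summarize them with a confusion matrix.
  import numpy as np
  import matplotlib.pyplot as plt
  from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

  predictions = model.predict(test_batches, verbose=0)
  predicted_labels = np.argmax(predictions, axis=-1)

  # test_batches must have been created with shuffle=False so that
  # test_batches.classes lines up with the prediction order.
  cm = confusion_matrix(y_true=test_batches.classes, y_pred=predicted_labels)
  ConfusionMatrixDisplay(cm, display_labels=['cat', 'dog']).plot()
  plt.show()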