
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton’s Paper Explained)
- Frank
- March 3, 2021
Yannic Kilcher covers a paper where Geoffrey Hinton describes GLOM, a Computer Vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders and RNNs. GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is constructed dynamically and differently for each input, […]
Transformers for Image Recognition at Scale
- Frank
- October 6, 2020
Yannic Kilcher explains why transformers are ruining convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks where CNNs excel. In the video, he explains the architecture of the Vision Transformer (ViT) and the reason why it works […]
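As a rough illustration of the ViT idea described above, here is a minimal sketch (not the paper's actual implementation, and with a random projection standing in for the learned one): the image is cut into fixed-size patches, each patch is flattened, and a linear projection turns it into an embedding so a standard Transformer can treat patches like word tokens.

```python
import numpy as np

def image_to_patch_embeddings(image, patch_size, embed_dim, rng=None):
    """Sketch of ViT patch embedding: (H, W, C) image -> (num_patches, embed_dim)."""
    rng = rng or np.random.default_rng(0)
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    # Cut the image into non-overlapping patch_size x patch_size patches
    # and flatten each patch into a single vector.
    patches = (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
        .transpose(0, 2, 1, 3, 4)
        .reshape(-1, patch_size * patch_size * c)
    )
    # In the real model this projection is learned; random here for illustration.
    projection = rng.standard_normal((patches.shape[1], embed_dim))
    return patches @ projection

# A 224x224 RGB image with 16x16 patches yields 14*14 = 196 tokens.
tokens = image_to_patch_embeddings(np.zeros((224, 224, 3)), patch_size=16, embed_dim=768)
print(tokens.shape)  # (196, 768)
```

The resulting sequence of 196 patch tokens (plus a class token and position embeddings in the actual model) is what the standard Transformer encoder consumes.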
Shark or Baseball? Inside the ‘Black Box’ of a Neural Network
Generally speaking, Neural Networks are somewhat of a mystery. While you can understand the mechanics and the math that power them, exactly how a network comes to its conclusions is a bit of a black box. Here’s an interesting story on how researchers are trying to peer into the mysteries of a neural net. Using […]
AI that Dresses Itself
Siraj Raval explores how a team of researchers at Google Brain and Georgia Tech developed an AI that learned how to dress itself using various types of clothing. They demonstrated their technology by presenting a video that shows an animated figure gracefully putting on clothing, and the most interesting part is that it learned how […]