
Illustrated Guide to Transformers Neural Network: A step by step explanation
Transformers are all the rage nowadays, but how do they work? This video demystifies the novel neural network architecture with a step-by-step explanation and illustrations of how transformers work. CORRECTIONS: The sine and cosine functions are actually applied to the embedding dimensions and time steps!
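The correction above can be made concrete with a small sketch of the sinusoidal positional encoding from "Attention Is All You Need": sine and cosine are applied across both the time steps and the embedding dimensions, with even dimensions getting sine and odd dimensions getting cosine at dimension-dependent frequencies. The dimensions below are illustrative, not taken from the video.

```python
import numpy as np

def positional_encoding(num_steps: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings of shape (num_steps, d_model)."""
    positions = np.arange(num_steps)[:, None]         # (num_steps, 1) time steps
    dims = np.arange(0, d_model, 2)[None, :]          # (1, d_model/2) even dims
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((num_steps, d_model))
    pe[:, 0::2] = np.sin(angles)                      # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                      # odd dimensions: cosine
    return pe

pe = positional_encoding(num_steps=50, d_model=16)
print(pe.shape)  # (50, 16)
```

Each row is the encoding for one time step, and because each dimension oscillates at a different frequency, every position gets a unique pattern the model can attend to.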
Read More
TransGAN: Two Transformers Can Make One Strong GAN
- Frank
- February 19, 2021
- AI
- artificial intelligence
- Arxiv
- attention is all you need
- attention mechanism
- attention neural networks
- Deep Learning
- deep learning explained
- generative adversarial network
- local attention
- Machine Learning
- machine learning explained
- multihead attention
- Neural Networks
- paper explained
- pixelshuffle
- self attention
- superresolution
- transformer gan
- transformer gans
- transformer generative adversarial network
- transformer generator
- transgan
- vision transformer
Generative Adversarial Networks (GANs) hold the state of the art in image generation. However, while the rest of computer vision is slowly being taken over by transformers and other attention-based architectures, every working GAN to date contains some form of convolutional layer. This paper changes that and builds TransGAN, the first GAN where both the generator […]
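The building block that replaces convolutions here is self-attention over pixel (or patch) tokens. Below is a minimal single-head self-attention sketch of the kind a transformer generator stacks; the token count and dimensions are illustrative assumptions, not TransGAN's actual configuration.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a token sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv                  # queries, keys, values
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                     # (tokens, tokens) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v                                # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 32))                         # 64 pixel tokens, dim 32
wq, wk, wv = (rng.normal(size=(32, 32)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (64, 32)
```

Unlike a convolution's fixed local kernel, every output token here is a learned mixture of all input tokens, which is what lets a purely attention-based generator model global image structure.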
Read More
Transformers for Image Recognition at Scale
- Frank
- October 6, 2020
- AI
- andrej karpathy
- anonymous
- artificial intelligence
- Arxiv
- attention is all you need
- attention mechanism
- beyer
- big transfer
- bit
- CNN
- Convolutional Neural Network
- Data Science
- Deep Learning
- explained
- Google Brain
- google research
- iclr
- iclr 2021
- karpathy
- Machine Learning
- Neural Networks
- Paper
- peer review
- review
- TPU
- tpu v3
- transformer
- transformer computer vision
- transformer images
- under submission
- vaswani
- vision transformer
- visual transformer
- vit
Yannic Kilcher explains why transformers are ruining convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks on image recognition tasks, which are classically tasks where CNNs excel. In this video, I explain the architecture of the Vision Transformer (ViT), the reason why it works […]
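The key move ViT makes is treating an image like a sentence: it is cut into non-overlapping patches, and each flattened patch is linearly projected into a token embedding. A minimal sketch of that patchify-and-project step, with the standard 224×224 / 16×16 sizes assumed for illustration:

```python
import numpy as np

def patchify(image: np.ndarray, patch: int) -> np.ndarray:
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4)        # (H/p, W/p, p, p, c)
    return patches.reshape(-1, patch * patch * c)     # (num_patches, p*p*c)

rng = np.random.default_rng(0)
img = rng.normal(size=(224, 224, 3))                  # one input image
tokens = patchify(img, patch=16)                      # 14 * 14 = 196 patches
proj = rng.normal(size=(16 * 16 * 3, 768))            # assumed embedding dim 768
embedded = tokens @ proj                              # (196, 768) patch tokens
print(tokens.shape, embedded.shape)
```

From there, the resulting 196 tokens (plus a learned classification token and positional embeddings) are fed to an otherwise standard Transformer encoder; no convolutions are involved.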
Read More