AI Natural Language Processing

[ML News] New ImageNet SOTA | Uber’s H3 hexagonal coordinate system | New text-image-pair dataset

Yannic provides the latest news in machine learning in this video. Time Stamps: 0:00 – Intro 0:20 – TruthfulQA benchmark shines new light on GPT-3 2:00 – LAION-400M image-text-pair dataset 4:10 – GoogleAI’s EfficientNetV2 and CoAtNet 6:15 – Uber’s H3: A hexagonal coordinate system 7:40 – AWS NeurIPS 2021 DeepRacer Challenge 8:15 – Helpful Libraries […]
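As a pointer for the H3 item: a minimal sketch of hexagonal indexing with the h3-py bindings (this assumes the v3-style function names; newer releases rename them, e.g. geo_to_h3 becomes latlng_to_cell).

```python
# Minimal sketch of Uber's H3 hexagonal indexing via h3-py
# (v3-style API assumed: geo_to_h3 / h3_to_geo / k_ring).
import h3

lat, lng = 37.7749, -122.4194  # San Francisco (illustrative point)
resolution = 9                 # finer resolution -> smaller hexagons

cell = h3.geo_to_h3(lat, lng, resolution)  # index the point into a hex cell
center = h3.h3_to_geo(cell)                # approximate cell center (lat, lng)
neighbors = h3.k_ring(cell, 1)             # the cell plus its 6 immediate neighbors

print(cell, center, len(neighbors))        # a k=1 ring contains 7 cells
```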

Read More
AI Interesting Research

PonderNet: Learning to Ponder

Humans don’t spend the same amount of mental effort on every problem. Instead, we respond quickly to easy tasks and take our time to deliberate on hard ones. DeepMind’s PonderNet attempts to achieve the same by dynamically deciding how many computation steps to allocate to any single input sample. This is done via a […]
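A minimal sketch of the inference-time mechanism (hypothetical module names, PyTorch assumed; the paper's training loss, which weights per-step losses by the halting distribution and adds a KL regularizer toward a geometric prior, is omitted):

```python
import torch
import torch.nn as nn

class PonderStep(nn.Module):
    """One recurrent 'pondering' step with a per-step halting probability."""
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)  # hypothetical recurrent core
        self.halt = nn.Linear(dim, 1)     # lambda_n: probability of halting now
        self.out = nn.Linear(dim, 1)      # per-step prediction head

    def forward(self, x, h):
        h = self.cell(x, h)
        lam = torch.sigmoid(self.halt(h))
        return h, lam, self.out(h)

def ponder(model, x, max_steps=10):
    """Inference: keep stepping until a Bernoulli(lambda_n) draw says halt."""
    h = torch.zeros(x.size(0), x.size(1))
    for n in range(max_steps):
        h, lam, y = model(x, h)
        # simplified: halt when every sample in the batch draws "halt"
        if n == max_steps - 1 or torch.bernoulli(lam).bool().all():
            return y, n + 1

y, steps = ponder(PonderStep(16), torch.randn(4, 16))
print(steps)  # after training, easy inputs should tend to halt in fewer steps
```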

Read More
Apple Google Privacy

What Your Phone Sends Every 5 Minutes to Apple or Google

Apple’s push into privacy may be mostly talk: a new privacy study analyzed which data smartphones transmit to Apple and Google, even with telemetry options disabled. It turns out that iPhones and Android phones not only send a number of identifiers on average every 5 minutes; some network connections even include location data and nearby devices. Professor […]

Read More
AI Research

Why AI is Harder Than We Think

Yannic Kilcher explains how the AI community has gone through regular cycles of AI Springs, in which rapid progress gave rise to massive overconfidence, heavy funding, and overpromising, followed by those promises going unfulfilled and a descent into periods of disenchantment and underfunding called AI Winters. In this video he explores a paper which examines the reasons for […]

Read More
AI Deep Learning

Deep Networks Are Kernel Machines

Yannic Kilcher explains the paper “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” Deep Neural Networks are often said to discover useful representations of the data. However, this paper challenges this prevailing view and suggests that rather than representing the data, deep neural networks store superpositions of the training data in their […]
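For reference, the paper's central claim (notation lightly paraphrased) is that a model f_w(x) trained by gradient descent on points x_i behaves like a kernel machine over those points:

```latex
\[
  y \;\approx\; g\Big( \sum_i a_i \, K(x, x_i) + b \Big),
\]
% where K is the "path kernel": gradient dot products integrated along the
% curve c(t) that the weights trace out during training,
\[
  K(x, x') \;=\; \int_{c(t)} \nabla_w f_w(x) \cdot \nabla_w f_w(x') \, \mathrm{d}t .
\]
```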

Read More
AI Research

SingularityNET – A Decentralized, Open Market and Network for AIs

Yannic Kilcher explains this white paper on SingularityNET. Big Tech is currently dominating the pursuit of ever more capable AI. This happens behind closed doors and results in a monopoly of power. SingularityNET is an open, decentralized network where anyone can offer and consume AI services, and where AI agents can interlink with each other […]

Read More
AI Natural Language Processing

Transformers for Image Recognition at Scale

Yannic Kilcher explains why transformers are ruining convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks where CNNs excel. In this video, I explain the architecture of the Vision Transformer (ViT), the reason why it works […]
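The core of ViT fits in a few lines. Below is a minimal sketch (PyTorch assumed, illustrative hyperparameters; the paper's pre-training recipe and regularization are omitted): patchify the image with a strided convolution, prepend a learnable [CLS] token, add positional embeddings, run a standard Transformer encoder, and classify from the [CLS] output.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, img=224, patch=16, dim=256, depth=4, heads=8, classes=1000):
        super().__init__()
        n = (img // patch) ** 2                      # number of patches
        # patch extraction + linear projection in one strided conv
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                            # x: (B, 3, H, W)
        t = self.embed(x).flatten(2).transpose(1, 2) # (B, n, dim) patch tokens
        t = torch.cat([self.cls.expand(len(x), -1, -1), t], dim=1)
        t = self.encoder(t + self.pos)
        return self.head(t[:, 0])                    # classify from [CLS]

logits = TinyViT()(torch.randn(2, 3, 224, 224))      # -> (2, 1000)
```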

Read More
AI Research

Explaining the Paper: Hopfield Networks is All You Need

Yannic Kilcher explains the paper “Hopfield Networks is All You Need.” Hopfield Networks are one of the classic models of biological memory networks. This paper generalizes modern Hopfield Networks to continuous states and shows that the corresponding update rule is equal to the attention mechanism used in modern Transformers. It further analyzes a pre-trained BERT […]
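The generalized update rule is compact enough to demo. Below is a minimal sketch (PyTorch assumed, illustrative dimensions): xi_new = X softmax(beta * X^T xi), where the columns of X are the stored patterns. This is the same form as Transformer attention, with X playing keys/values and xi the query, and it retrieves a stored pattern from a noisy query.

```python
import torch

torch.manual_seed(0)
X = torch.randn(64, 10)               # 10 stored patterns of dimension 64
xi = X[:, 3] + 0.3 * torch.randn(64)  # noisy query near stored pattern 3
beta = 8.0                            # inverse temperature; higher = sharper retrieval

for _ in range(3):                    # the paper shows convergence is typically one step
    xi = X @ torch.softmax(beta * (X.T @ xi), dim=0)

print(torch.softmax(beta * (X.T @ xi), dim=0).argmax())  # -> tensor(3)
```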

Read More
AI Machine Learning

Object-Centric Learning with Slot Attention

Visual scenes are often composed of sets of independent objects. Yet current vision models make no assumptions about the nature of the pictures they look at. Yannic Kilcher explores a paper on object-centric learning. By imposing an objectness prior, this paper introduces a module that is able to recognize permutation-invariant sets of objects from pixels in […]
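Below is a minimal sketch of the Slot Attention iteration (PyTorch assumed; the paper's layer norms, residual MLP, and Gaussian slot initialization are simplified away). The distinguishing detail is that the softmax normalizes over the slots rather than over the inputs, so slots compete to explain each input feature.

```python
import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    def __init__(self, dim, n_slots=5, iters=3):
        super().__init__()
        self.iters = iters
        self.slots_init = nn.Parameter(torch.randn(1, n_slots, dim))
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, inputs):                       # inputs: (B, N, dim)
        B, N, D = inputs.shape
        k, v = self.k(inputs), self.v(inputs)
        slots = self.slots_init.expand(B, -1, -1)
        for _ in range(self.iters):
            q = self.q(slots)
            # softmax over the slot axis: slots compete for each input
            attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)  # weighted mean over inputs
            updates = attn @ v                            # (B, n_slots, dim)
            slots = self.gru(updates.reshape(-1, D),
                             slots.reshape(-1, D)).view(B, -1, D)
        return slots                                  # one vector per object slot

slots = SlotAttention(dim=32)(torch.randn(2, 49, 32))  # e.g. 7x7 feature map -> 5 slots
```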

Read More