
AI Natural Language Processing

[ML News] New ImageNet SOTA | Uber’s H3 hexagonal coordinate system | New text-image-pair dataset

Yannic provides the latest news in machine learning in this video.

Time Stamps:
0:00 – Intro
0:20 – TruthfulQA benchmark shines new light on GPT-3
2:00 – LAION-400M image-text-pair dataset
4:10 – GoogleAI’s EfficientNetV2 and CoAtNet
6:15 – Uber’s H3: A hexagonal coordinate system
7:40 – AWS NeurIPS 2021 DeepRacer Challenge
8:15 – Helpful Libraries […]
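
Of the items above, Uber’s H3 is the one that benefits most from a concrete example: it is an open-source library that indexes locations on the globe into a hierarchy of hexagonal cells. Below is a minimal sketch of typical usage from Python, assuming the h3-py bindings with the v3 API (function names changed in v4).

# Index a point into Uber's H3 hexagonal grid (h3-py v3 API assumed).
import h3

lat, lng = 37.7749, -122.4194   # San Francisco
res = 9                         # resolution: 0 (coarse) .. 15 (fine)

# Map the point to a hexagonal cell at the chosen resolution.
cell = h3.geo_to_h3(lat, lng, res)

# Recover the cell's center and its ring of immediate neighbors.
center = h3.h3_to_geo(cell)          # (lat, lng) of the cell center
neighbors = h3.k_ring(cell, 1)       # the cell itself plus its 6 neighbors
print(cell, center, len(neighbors))  # len(neighbors) == 7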

Read More
Science

The Super Bizarre Quantum Eraser Experiment

The quantum eraser experiment is one of the weirdest phenomena ever observed and will melt your mind. It seems that quantum mechanics mixes past and future together. In this video, Fermilab’s Dr. Don takes you through this quantum conundrum.

Read More
AI Interesting Research

PonderNet: Learning to Ponder

Humans don’t spend the same amount of mental effort on every problem. Instead, we respond quickly to easy tasks and take our time deliberating over hard ones. DeepMind’s PonderNet attempts to achieve the same by dynamically deciding how many computation steps to allocate to any single input sample. This is done via a […]
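
For intuition, here is a minimal, hypothetical sketch of that idea, not DeepMind’s code: a recurrent step function emits a prediction and a halting probability at every step, and computation stops when a coin flip on that probability says so or a step budget runs out. All module names and sizes below are illustrative assumptions.

# Toy PonderNet-style adaptive computation (illustrative, not DeepMind's code).
import torch
import torch.nn as nn

class PonderStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)   # recurrent state update
        self.out = nn.Linear(dim, 1)       # per-step prediction
        self.halt = nn.Linear(dim, 1)      # per-step halting logit

    def forward(self, x, h):
        h = self.cell(x, h)
        lam = torch.sigmoid(self.halt(h))  # probability of halting now
        return h, self.out(h), lam

def ponder(step, x, max_steps=20):
    h = torch.zeros(x.shape[0], x.shape[1])
    for n in range(max_steps):
        h, y, lam = step(x, h)
        # At inference, halt stochastically with probability lam; training
        # instead weights each step's loss by the induced halting distribution.
        if bool(torch.bernoulli(lam).all()) or n == max_steps - 1:
            return y, n + 1

y, steps_used = ponder(PonderStep(32), torch.randn(4, 32))
print(y.shape, steps_used)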

Read More
AI Research

Why AI is Harder Than We Think

Yannic Kilcher explains how the AI community has gone through regular cycles of AI Springs, where rapid progress gave rise to massive overconfidence, high funding, and overpromising, followed by those promises going unfulfilled and a slide into periods of disillusionment and underfunding called AI Winters. In this video he explores a paper which examines the reasons for […]

Read More
AI Research

GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton’s Paper Explained)

Yannic Kilcher covers a paper in which Geoffrey Hinton describes GLOM, a computer vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and RNNs. GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is constructed dynamically and differently for each input, […]
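
The update rule is easier to see in toy form. Below is a heavily simplified, hypothetical numpy sketch of a GLOM-style iteration, not Hinton’s actual model: every image location keeps one embedding per level of the part-whole hierarchy, and each level is repeatedly replaced by a mix of its previous value, a bottom-up signal, a top-down signal, and an attention-weighted consensus with the same level at other locations. All weights here are random stand-ins for the learned per-level networks.

# Toy GLOM-style column update (illustrative only; weights are random).
import numpy as np

L, N, D = 3, 16, 8                       # levels, locations, embedding dim
E = np.random.randn(L, N, D)             # one embedding per (level, location)
W_up = np.random.randn(L, D, D) * 0.1    # stand-in for bottom-up networks
W_down = np.random.randn(L, D, D) * 0.1  # stand-in for top-down networks

def attention_avg(X):
    # Same-level consensus: average other locations, weighted by similarity.
    A = np.exp(X @ X.T / np.sqrt(D))
    A /= A.sum(axis=1, keepdims=True)
    return A @ X

for _ in range(5):                       # a few synchronous iterations
    new_E = E.copy()
    for l in range(L):
        parts = [E[l], attention_avg(E[l])]
        if l > 0:
            parts.append(E[l - 1] @ W_up[l])    # bottom-up from parts
        if l < L - 1:
            parts.append(E[l + 1] @ W_down[l])  # top-down from wholes
        new_E[l] = np.mean(parts, axis=0)
    E = new_E

print(E.shape)  # with trained networks, "islands of agreement" would emerge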

Read More
AI Deep Learning

Deep Networks Are Kernel Machines

Yannic Kilcher explains the paper “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” Deep neural networks are often said to discover useful representations of the data. However, this paper challenges that prevailing view and suggests that, rather than representing the data, deep neural networks store superpositions of the training data in their […]
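
Concretely, the paper’s claim is that the trained model’s prediction can be written as a weighted sum of kernel similarities to the training points, y(x) ≈ Σᵢ aᵢ K(x, xᵢ) + b, where the “path kernel” K measures how similarly gradient descent moves the outputs on x and xᵢ over the whole training run. The sketch below is a deliberately crude simplification of that idea, not the paper’s construction: it approximates such a kernel at a single weight setting by the inner product of per-example gradients.

# Gradient inner-product ("tangent") kernel: a one-point approximation of the
# paper's path kernel, which integrates this quantity along the training path.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))

def grad_features(x):
    # Flattened gradient of the scalar output with respect to all weights.
    net.zero_grad()
    net(x.unsqueeze(0)).sum().backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

def kernel(x1, x2):
    return torch.dot(grad_features(x1), grad_features(x2)).item()

x_train = torch.randn(5, 4)
x_query = torch.randn(4)
# A full kernel-machine prediction would combine these similarities with
# per-example coefficients a_i accumulated over the training run.
print([round(kernel(x_query, xi), 3) for xi in x_train])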

Read More
AI Research

SingularityNET – A Decentralized, Open Market and Network for AIs

Yannic Kilcher explains the SingularityNET white paper. Big Tech currently dominates the pursuit of ever more capable AI. This happens behind closed doors and results in a monopoly of power. SingularityNET is an open, decentralized network where anyone can offer and consume AI services, and where AI agents can interlink with each other […]

Read More
AI Interesting Science

How to Wire a Computer Like a Human Brain

The goal of neuromorphic computing is simple: mimic the neural structure of the brain. Seeker checks out the current generation of computer chips that is getting closer to accomplishing this non-trivial engineering task.
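
The basic computational primitive these chips implement can be written in a few lines. Here is a textbook leaky integrate-and-fire neuron in Python, a generic illustration rather than the code of any particular chip: the membrane voltage leaks toward rest, integrates input current, and emits a spike when it crosses a threshold.

# Leaky integrate-and-fire neuron (textbook model, illustrative only).
import numpy as np

dt, tau = 1.0, 20.0                      # time step and membrane constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spikes = v_rest, []

current = np.random.uniform(0.0, 0.12, size=200)  # noisy input drive
for t, i_in in enumerate(current):
    v += dt / tau * (v_rest - v) + i_in  # leak toward rest, integrate input
    if v >= v_thresh:                    # threshold crossing -> spike
        spikes.append(t)
        v = v_reset                      # reset after firing

print(len(spikes), "spikes at steps", spikes[:5])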

Read More
AI Natural Language Processing

Transformers for Image Recognition at Scale

Yannic Kilcher explains why transformers are ruining convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks on image recognition tasks, a domain where CNNs classically excel. In this video, he explains the architecture of the Vision Transformer (ViT), the reason why it works […]
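
The architecture itself is compact enough to sketch: cut the image into fixed-size patches, linearly embed each patch, prepend a learnable class token, add position embeddings, and run the sequence through a standard Transformer encoder. The PyTorch toy below follows that recipe with made-up dimensions, not the paper’s configuration.

# Minimal Vision Transformer (illustrative sizes, not the paper's).
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, img=32, patch=8, dim=64, depth=2, heads=4, classes=10):
        super().__init__()
        n = (img // patch) ** 2                         # number of patches
        self.patch = patch
        self.embed = nn.Linear(3 * patch * patch, dim)  # linear patch projection
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))      # class token
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))  # position embeddings
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                               # x: (B, 3, img, img)
        B, p = x.shape[0], self.patch
        # Split into non-overlapping p-by-p patches and flatten each one.
        x = x.unfold(2, p, p).unfold(3, p, p)           # (B, 3, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, 3 * p * p)
        tokens = self.embed(x)                          # (B, n, dim)
        tokens = torch.cat([self.cls.expand(B, -1, -1), tokens], dim=1)
        tokens = self.encoder(tokens + self.pos)
        return self.head(tokens[:, 0])                  # classify from class token

model = TinyViT()
print(model(torch.randn(2, 3, 32, 32)).shape)           # torch.Size([2, 10])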

Read More