
AI Natural Language Processing

[ML News] New ImageNet SOTA | Uber’s H3 hexagonal coordinate system | New text-image-pair dataset

Yannic provides the latest news in machine learning in this video. Time stamps: 0:00 – Intro; 0:20 – TruthfulQA benchmark shines new light on GPT-3; 2:00 – LAION-400M image-text-pair dataset; 4:10 – Google AI’s EfficientNetV2 and CoAtNet; 6:15 – Uber’s H3: a hexagonal coordinate system; 7:40 – AWS NeurIPS 2021 DeepRacer Challenge; 8:15 – Helpful Libraries […]
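For readers curious about the H3 system mentioned in the time stamps, here is a minimal sketch of indexing a point into H3's hexagonal grid, assuming the open-source h3-py bindings (v3-style function names; v4 of the library renamed several of them). The coordinates and resolution below are arbitrary illustration values.

```python
# Minimal sketch of H3 hexagonal indexing via the h3-py package
# (v3-style API; v4 renamed e.g. geo_to_h3 -> latlng_to_cell).
import h3

lat, lng = 37.7749, -122.4194  # arbitrary point (San Francisco) for illustration
resolution = 9                 # higher resolution -> smaller hexagonal cells

cell = h3.geo_to_h3(lat, lng, resolution)       # index the point into a hex cell
center = h3.h3_to_geo(cell)                     # approximate (lat, lng) of the cell center
neighbors = h3.k_ring(cell, 1)                  # the cell plus its 6 immediate neighbors
parent = h3.h3_to_parent(cell, resolution - 1)  # coarser cell containing this one

print(cell, center, len(neighbors), parent)
```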

Read More
AI Research

GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton’s Paper Explained)

Yannic Kilcher covers a paper in which Geoffrey Hinton describes GLOM, a computer vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and RNNs. GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is constructed dynamically and differently for each input, […]
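As a rough illustration of the mechanism described above, the sketch below implements one GLOM-style update step in numpy: every image location holds one embedding per level, and each level is pulled toward a mix of its previous state, a bottom-up prediction, a top-down prediction, and an attention-weighted average of the same level at other locations. The random linear maps stand in for GLOM's learned MLPs, and the equal mixing weights are an assumption made here for simplicity.

```python
# Toy GLOM-style column update (illustrative sketch, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n_locations, n_levels, d = 16, 4, 32
emb = rng.normal(size=(n_locations, n_levels, d))  # one embedding per location and level

# stand-in bottom-up / top-down maps (the real model uses learned MLPs)
W_up = rng.normal(size=(n_levels, d, d)) / np.sqrt(d)
W_down = rng.normal(size=(n_levels, d, d)) / np.sqrt(d)

def glom_step(emb):
    new = np.empty_like(emb)
    for l in range(n_levels):
        bottom_up = emb[:, l - 1] @ W_up[l] if l > 0 else emb[:, l]
        top_down = emb[:, l + 1] @ W_down[l] if l < n_levels - 1 else emb[:, l]
        # attention over locations at the same level: similar vectors pull each
        # other together, forming the "islands" that act as parse-tree nodes
        scores = emb[:, l] @ emb[:, l].T / np.sqrt(d)
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        attended = attn @ emb[:, l]
        # equal mixing weights assumed here for simplicity
        new[:, l] = (emb[:, l] + bottom_up + top_down + attended) / 4.0
    return new

for _ in range(10):
    emb = glom_step(emb)
```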

Read More
AI Research

Explaining the Paper: Hopfield Networks is All You Need

Yannic Kilcher explains the paper “Hopfield Networks is All You Need.” Hopfield networks are one of the classic models of biological memory. This paper generalizes modern Hopfield networks to continuous states and shows that the corresponding update rule is equal to the attention mechanism used in modern Transformers. It further analyzes a pre-trained BERT […]
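A minimal numpy sketch (not from the post) of the connection mentioned above: the paper's continuous Hopfield update retrieves a stored pattern in one step, and with β = 1/√d it reads as single-query softmax attention in which the stored patterns serve as both keys and values; the learned projection matrices of full transformer attention are omitted here.

```python
# Continuous Hopfield update xi_new = X^T softmax(beta * X xi) and its
# reading as single-query attention (illustrative sketch).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d, n = 8, 5                  # pattern dimension, number of stored patterns
X = rng.normal(size=(n, d))  # stored patterns (rows)
xi = rng.normal(size=d)      # current state / query
beta = 1.0 / np.sqrt(d)      # scaling that matches attention's 1/sqrt(d)

# modern (continuous) Hopfield update: retrieve a pattern in one step
xi_new = X.T @ softmax(beta * X @ xi)

# the same computation written as single-query attention:
# softmax(q K^T / sqrt(d)) V with q = xi and K = V = X
attn_out = softmax((xi @ X.T) / np.sqrt(d)) @ X

assert np.allclose(xi_new, attn_out)
```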

Read More
AI

Marcus Hutter on Universal Artificial Intelligence, AIXI, and AGI

Lex Fridman interviews Marcus Hutter, a senior research scientist at DeepMind and professor at the Australian National University. Throughout his research career, including work with Jürgen Schmidhuber and Shane Legg, he has proposed many interesting ideas in and around the field of artificial general intelligence, including the development of the AIXI model, which is […]

Read More