Knowledge Distillation as Semiparametric Inference

Microsoft Research highlights this research topic on Knowledge Distillation. More accurate machine learning models often demand more computation and memory at test time, making them...
Why AI is Harder Than We Think

Yannic Kilcher explains how the AI community has gone through regular cycles of AI Springs, where rapid progress gave rise to massive overconfidence, high funding,...
Sound Capture and Speech Enhancement for Communication and Distant Speech Recognition

Microsoft Research discusses the general architecture of speech enhancement pipelines for the needs of hands-free telecommunication and distant speech recognition. The talk will discuss both...
FastNeRF: High-Fidelity Neural Rendering at 200FPS

Microsoft Research highlights recent work on Neural Radiance Fields (NeRF), which showed how neural networks can be used to encode complex 3D environments that can be...
This AI Makes Beautiful Videos From Your Images

Two Minute Papers explores the paper "Animating Pictures with Eulerian Motion Fields" in this video. Details
Self-Tuning Networks: Amortizing the Hypergradient Computation for Hyperparameter Optimization

Microsoft Research shares this talk on how the optimization of many deep learning hyperparameters can be formulated as a bilevel optimization problem. While most black-box...
Directions in ML: Taking Advantage of Randomness in Expensive Optimization Problems

Optimization is at the heart of machine learning, and gradient computation is central to many optimization techniques. Stochastic optimization, in particular, has taken center stage...
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton’s Paper Explained)

Yannic Kilcher covers a paper in which Geoffrey Hinton describes GLOM, a computer vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders...
Mind Reading For Brain-To-Text Communication?!

Two Minute Papers explores the paper "High-performance brain-to-text communication via imagined handwriting." As much as I’d like the idea of sending a text message without... Details
Automating ML Performance Metric Selection

Microsoft Research hosts this talk on Automating ML Performance Metric Selection. From music recommendations to high-stakes medical treatment selection, complex decision-making tasks are increasingly automated...