Neural Networks

AI Neural Networks Research

Retentive Network: A Successor to Transformer for Large Language Models (Paper Explained)

This video is from Yannic Kilcher. Retention is an alternative to Attention in Transformers that can be written both in a parallel and in a recurrent fashion. This means the architecture achieves training parallelism while maintaining low-cost inference. Experiments in the paper look very promising. Paper: https://arxiv.org/abs/2307.08621
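The parallel/recurrent duality is the core trick: causal attention with an exponential decay can be computed either as one big matrix product (training) or as a running state update (inference). A minimal NumPy sketch of single-head retention, with hypothetical function names, showing the two forms agree:

```python
import numpy as np

def retention_parallel(Q, K, V, gamma):
    # Parallel form: O = (Q K^T * D) V, where D[n, m] = gamma^(n-m)
    # for n >= m and 0 otherwise (causal decay mask).
    n = Q.shape[0]
    idx = np.arange(n)
    D = np.where(idx[:, None] >= idx[None, :],
                 gamma ** (idx[:, None] - idx[None, :]), 0.0)
    return (Q @ K.T * D) @ V

def retention_recurrent(Q, K, V, gamma):
    # Recurrent form: state S_n = gamma * S_{n-1} + k_n^T v_n,
    # output o_n = q_n S_n -- constant cost per generated token.
    S = np.zeros((Q.shape[1], V.shape[1]))
    out = []
    for q, k, v in zip(Q, K, V):
        S = gamma * S + np.outer(k, v)
        out.append(q @ S)
    return np.stack(out)
```

Expanding the recurrence gives o_n = Σ_{m≤n} γ^(n−m) (q_n·k_m) v_m, which is exactly what the masked matrix product computes, so both paths produce identical outputs.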

Read More
AI Generative AI Large Language Models Research

Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution (Paper Explained)

This video is from Yannic Kilcher. Promptbreeder is a self-improving self-referential system for automated prompt engineering. Give it a task description and a dataset, and it will automatically come up with appropriate prompts for the task. This is achieved by an evolutionary algorithm where not only the prompts, but also the mutation-prompts are improved over […]
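The self-referential part is that the mutation operators themselves evolve: each unit carries a task prompt and a mutation prompt, and both can be rewritten. A toy sketch of that loop, with hypothetical `fitness` and `mutate` callables standing in for the dataset evaluation and the LLM-driven rewriting the paper uses:

```python
import random

def evolve_prompts(task_prompts, mutation_prompts, fitness, mutate, generations=10):
    """Toy sketch of a Promptbreeder-style loop (assumed interface).
    fitness(prompt) scores a task prompt on the dataset; mutate(mutation_prompt,
    text) rewrites text -- in the paper this is done by an LLM."""
    population = list(zip(task_prompts, mutation_prompts))
    for _ in range(generations):
        # Binary tournament: the loser is overwritten by a mutated winner.
        a, b = random.sample(range(len(population)), 2)
        if fitness(population[a][0]) < fitness(population[b][0]):
            a, b = b, a
        task, mut = population[a]
        new_task = mutate(mut, task)
        # Occasionally mutate the mutation prompt itself (self-reference).
        new_mut = mutate(mut, mut) if random.random() < 0.1 else mut
        population[b] = (new_task, new_mut)
    return max(population, key=lambda unit: fitness(unit[0]))[0]
```

The design choice worth noting is that selection pressure applies to the rewriting rules as well as the prompts, so the system can improve how it improves itself.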

Read More
AI Deep Learning Large Language Models

Are Retentive Networks A Successor to Transformer for Large Language Models?

Retention is an alternative to Attention in Transformers that can be written both in a parallel and in a recurrent fashion. This means the architecture achieves training parallelism while maintaining low-cost inference. Experiments in the paper look very promising. Yannic Kilcher elaborates.

Read More
AI Large Language Models Research

Scaling Transformer to 1M tokens and beyond with RMT (Paper Explained)

Yannic Kilcher explains this paper that promises to scale transformers to 1 million tokens and beyond. We take a look at the technique behind it: The Recurrent Memory Transformer, and what its strengths and weaknesses are.
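The mechanism behind RMT is segment-level recurrence: the long sequence is chopped into segments the base transformer can handle, and a small set of memory tokens is appended to each segment, with the memory outputs of one segment fed in as the memory inputs of the next. A hedged sketch of that wiring, where `transformer` is any (n, d) → (n, d) function and the interface is an assumption for illustration:

```python
import numpy as np

def rmt_process(tokens, segment_len, num_mem, transformer):
    """Sketch of Recurrent Memory Transformer-style processing: memory
    tokens are read at the front of each segment and written at the back,
    then carried forward, so sequence length is unbounded while each
    transformer call stays at segment_len + 2 * num_mem tokens."""
    d = tokens.shape[1]
    memory = np.zeros((num_mem, d))  # initial memory state
    outputs = []
    for start in range(0, len(tokens), segment_len):
        seg = tokens[start:start + segment_len]
        x = np.concatenate([memory, seg, memory], axis=0)
        y = transformer(x)
        outputs.append(y[num_mem:num_mem + len(seg)])
        memory = y[num_mem + len(seg):]  # write-memory becomes next read-memory
    return np.concatenate(outputs, axis=0)
```

The strength and the weakness are the same thing: compute stays linear in sequence length, but everything the model remembers about earlier segments must squeeze through those few memory tokens.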

Read More
Future Hardware Research

Get the Inside Scoop on Neuromorphic Computing Part 1

Computer design has always been inspired by biology, especially the brain. In this episode of Architecture All Access – Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab – explains the relationship between Neuromorphic Computing and the principles of brain computation at the circuit level that are enabling next-generation intelligent devices […]

Read More
Neural Networks

Code From Scratch: Neural Networks

Sapphire Dev shows us how neural networks operate under the hood. These details can be hidden from the programmer by frameworks such as TensorFlow, but to become a competent AI developer, understanding the fundamental algorithms behind these networks is invaluable. Here, we embark on a journey to create our own network in the Dart programming language.
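The video builds its network in Dart; the same fundamentals (forward pass, backpropagation, gradient-descent updates) can be sketched in a few lines of NumPy. This is a minimal illustration learning XOR, not the video's implementation:

```python
import numpy as np

# A tiny 2-layer network trained on XOR with plain gradient descent.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass through hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule on mean-squared error with sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    # Gradient-descent parameter updates (learning rate 0.5).
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(axis=0)
```

Frameworks like TensorFlow automate exactly these steps (and their gradients), which is why writing them once by hand pays off.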

Read More
Natural Language Processing Research

LLaMA: Open and Efficient Foundation Language Models (Paper Explained)

Large Language Models (LLMs) are all the rage right now. ChatGPT is the LLM everyone talks about, but there are others. With the attention (and money) that OpenAI is getting, expect more of them. LLaMA is a series of large language models from 7B to 65B parameters, trained by Meta AI. They train for longer […]

Read More