
AI
Natural Language Processing
AlphaCode Explained: AI Code Generation
- Frank
- February 15, 2022
- AI
- AI AlphaCode
- AI Codex
- Alpha Code
- AlphaCode
- AlphaCode AI
- artificial intelligence
- attention
- billion
- billion parameter
- codex
- codex ai
- Deep Mind
- DeepMind
- explained
- Explanation
- gopher explained
- Gopher Model
- GPT-2
- gpt-3
- gpt-4
- language modeling
- Large Language Model
- logic reasoning
- Machine Learning
- ML
- Natural Language Processing
- NLP
- OpenAI
- OpenAI Codex
- Retro
- self attention
- Self-Attention
- Self-Supervised Learning
- sota
- State of the Art
- Text Generation
- transformer
AlphaCode is DeepMind’s new massive language model for generating code. It is similar to OpenAI Codex, except that the paper provides a bit more analysis. The field of NLP within AI and ML has exploded, with more papers appearing all the time. Hopefully this video can help you understand how AlphaCode works […]
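A minimal sketch of the sample-and-filter idea behind AlphaCode-style code generation: draw many candidate programs from the model, then keep only those that pass the problem’s example tests. The `sample_program` and `run_tests` helpers here are hypothetical placeholders, not any released API.

```python
import random

def sample_program(problem_description: str) -> str:
    # Hypothetical stand-in for sampling one candidate solution
    # from the code-generation language model.
    return f"# candidate {random.randint(0, 10**6)} for: {problem_description[:30]}"

def run_tests(program: str, example_tests) -> bool:
    # Hypothetical stand-in for running the candidate on the example
    # inputs and comparing its output to the expected output.
    return random.random() < 0.01  # most samples fail; a few survive

def solve(problem_description: str, example_tests, num_samples: int = 1000):
    candidates = [sample_program(problem_description) for _ in range(num_samples)]
    # Filtering on the example tests discards the vast majority of samples,
    # leaving a small pool of plausible solutions to submit.
    return [c for c in candidates if run_tests(c, example_tests)]
```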
Read More
AI
Natural Language Processing
GPT-3: Language Models are Few-Shot Learners
- Frank
- June 2, 2020
- AI
- artificial intelligence
- Arxiv
- attention
- autoregressive
- Bert
- boolq
- common crawl
- context
- corpus
- deep language
- Deep Learning
- explained
- Few Shot
- glue
- GPT-2
- gpt-3
- gpt2
- gpt3
- heads
- language model
- Machine Learning
- Math
- Microsoft
- mlm
- Natural Language Processing
- natural questions
- Neural Networks
- news
- NLP
- OpenAI
- Paper
- perplexity
- question answering
- sota
- strings
- superglue
- training data
- Transformers
- turing
- Wikipedia
- zero shot
How far can you go with ONLY language modeling? Can a large enough language model perform NLP tasks out of the box? OpenAI takes on these and other questions by training a transformer that is an order of magnitude larger than anything built before, and the results are astounding. Yannic Kilcher […]
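A minimal sketch of the few-shot idea: the “training” happens entirely in the prompt, with no gradient updates. The translation examples follow the style used in the GPT-3 paper; `generate` is a hypothetical stand-in for sampling a completion from a large autoregressive language model.

```python
# Few-shot prompt: a task description plus a handful of in-context examples.
few_shot_prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)

def generate(prompt: str) -> str:
    # Placeholder: a real system would sample the most likely
    # continuation of the prompt from the language model here.
    return " fromage"

print(few_shot_prompt + generate(few_shot_prompt))
```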
Read More