AI Red Hat

In the Clouds (E34) | Wassup with WASM ft. Nigel Poulton

On this special episode of ‘In the Clouds’, Stu Miniman has a candid conversation with long-time collaborator and fellow tech-head Nigel Poulton – speaker, author and content-creator for all things WebAssembly (Wasm) and Kubernetes. Find out more about his work at https://nigelpoulton.com/ and do not miss his updated version of the renowned book ‘The Kubernetes […]

Read More
AI MLOps Red Hat

AI 101: What’s the difference between DevOps and MLOps?

Prasanth Anbalagan (Senior Principal Technical Marketing Manager for AI) will share the differences and similarities between DevOps and MLOps.

Read More
AI Generative AI Large Language Models

How To Run Llama 3 8B, 70B Models On Your Laptop (Free)

Unlock the power of AI right from your laptop with this comprehensive tutorial on how to set up and run Meta’s latest Llama 3 models (8B and 70B versions). Written guide.
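
The full setup steps are in the linked guide, which may use a different tool (for example Ollama or llama.cpp). As a rough, hypothetical sketch only, here is one way to run the 8B Instruct model locally with the Hugging Face transformers library, assuming you have accepted Meta’s license for the meta-llama/Meta-Llama-3-8B-Instruct checkpoint and have hardware with enough memory:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: load Llama 3 8B Instruct locally (gated model; requires
# accepting Meta's license on Hugging Face and running `huggingface-cli login`).
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available GPU/CPU
)

# Format a chat prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Summarize what WebAssembly is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))

The 70B model follows the same pattern but needs far more memory, which is why quantized builds are usually the practical choice on a laptop.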

Read More
AI Ethics Generative AI

Microsoft’s New REALTIME AI Face Animator – Make Anyone Say Anything

This video from AI Search highlights Microsoft’s new VASA-1 model.

Read More
Drones

Vortex Cannon vs Drone

Mark Rober is back at it with another video, this time shooting down drones with a vortex cannon. Yes, you read that right.

Read More
AI Generative AI Large Language Models Startups

Mistral Raising at $5B Just 4 Months After Raising at $2B

Mistral, renowned for its open-source LLM contributions, is set to raise funds at a $5 billion valuation, a substantial increase from the $2 billion valuation just four months ago. This rapid valuation growth follows Mistral’s strategic partnership with Microsoft, which makes its non-open-source models available through Azure and points to substantial revenue potential. The company also recently released […]

Read More
AI Research

Bulk Reading Abstracts of New AI Papers – April 12, 2024

This video is from Tunadorable, in which he reads the abstracts of the papers chosen for his weekly newsletter.

Read More
AI Research

RAFT: Adapting Language Model to Domain Specific RAG Paper Discussion

This video is from Rithesh Sreenivasan, discussing the paper: “Retrieval Augmented Fine Tuning (RAFT), a training recipe that improves the model’s ability to answer questions in an ‘open-book’ in-domain setting. In RAFT, given a question and a set of retrieved documents, we train the model to ignore those documents that don’t help in answering the question, which we […]

Read More
AI Generative AI Large Language Models Natural Language Processing

How to Create Dataset Locally with Retrieval Aware Fine-tuning (RAFT)

This video from Fahd Mirza introduces RAFT, a recipe for adapting LLMs to domain-specific RAG. Retrieval Aware Fine-tuning (RAFT) trains LLMs to make better use of retrieved context, and the video shows how to create such a dataset locally.
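
Neither video’s exact steps are reproduced here, but the core idea from the RAFT paper discussed above is to pair each question with the one “oracle” document that answers it plus several distractors. A minimal, hypothetical Python sketch of assembling one such training example (field names and prompt format are illustrative, not the paper’s or the videos’):

import json
import random

# Hypothetical sketch of one RAFT-style fine-tuning example: a question, the
# "oracle" document that answers it, and a few distractor documents the model
# should learn to ignore. Field names and prompt format are illustrative only.

def build_raft_example(question, oracle_doc, distractor_docs, answer, num_distractors=3):
    docs = [oracle_doc] + random.sample(distractor_docs, num_distractors)
    random.shuffle(docs)  # keep the oracle from always sitting in the same slot
    context = "\n\n".join(f"[Document {i + 1}]\n{d}" for i, d in enumerate(docs))
    return {
        "prompt": f"{context}\n\nQuestion: {question}\nAnswer:",
        # The target completion is grounded in the oracle document only.
        "completion": answer,
    }

example = build_raft_example(
    question="What year was Kubernetes first released?",
    oracle_doc="Kubernetes was originally released by Google in 2014.",
    distractor_docs=[
        "Docker Swarm is a container orchestration tool.",
        "WebAssembly is a portable binary instruction format.",
        "Apache Mesos manages cluster resources.",
        "Nomad is a workload orchestrator from HashiCorp.",
    ],
    answer="Kubernetes was first released in 2014.",
)
print(json.dumps(example, indent=2))

Fine-tuning on examples like this teaches the model to ground its answer in the relevant document and ignore the distractors, which is the behavior described in the paper excerpt above.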

Read More
AI Generative AI Large Language Models Livestream

LLMs, Latent Spaces, and More

This video is from FranksWorldTV.

Read More