Interconnects

By Nathan Lambert

Image by Nathan Lambert

Category: Technology

Subscribers: 16
Reviews: 0
Episodes: 147

Description

Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories.

www.interconnects.ai

Episodes (title and date)

The distillation panic (May 04, 2026)
My bets on open models, mid-2026 (Apr 15, 2026)
The inevitable need for an open model consortium (Apr 11, 2026)
Claude Mythos and misguided open-weight fearmongering (Apr 09, 2026)
Gemma 4 and what makes an open model succeed (Apr 03, 2026)
Lossy self-improvement (Mar 22, 2026)
GPT 5.4 is a big step for Codex (Mar 18, 2026)
What comes next with open models (Mar 16, 2026)
Dean Ball on open models and government control (Mar 06, 2026)
Olmo Hybrid and future LLM architectures (Mar 05, 2026)
How much does distillation really matter for Chinese LLMs? (Feb 24, 2026)
Opus 4.6, Codex 5.3, and the post-benchmark era (Feb 09, 2026)
Why Nvidia builds open models with Bryan Catanzaro (Feb 04, 2026)
Thoughts on the job market in the age of LLMs (Jan 30, 2026)
Arcee AI goes all-in on open models built in the U.S. (Jan 27, 2026)
Get Good at Agents (Jan 21, 2026)
Use multiple models (Jan 11, 2026)
Claude Code Hits Different (Jan 09, 2026)
Open models: Hot or Not with Nathan Lambert & Florian Brand (Dec 18, 2025)
New Talk: Building Olmo 3 Think (Dec 10, 2025)
Olmo 3: America’s truly open reasoning models (Nov 20, 2025)
Why AI writing is mid (Nov 17, 2025)
Interview: Ant Group's open model ambitions (Nov 12, 2025)
5 Thoughts on Kimi K2 Thinking (Nov 06, 2025)
Burning out (Oct 25, 2025)
How to scale RL (Oct 20, 2025)
The State of Open Models (Oct 16, 2025)
Thoughts on The Curve (Oct 07, 2025)
ChatGPT: The Agentic App (Sep 30, 2025)
Thinking, Searching, and Acting (Sep 22, 2025)
Coding as the epicenter of AI progress and the path to general agents (Sep 18, 2025)
On China's open source AI trajectory (Sep 09, 2025)
Ranking the Chinese Open Model Builders (Aug 17, 2025)
Contra Dwarkesh on Continual Learning (Aug 15, 2025)
GPT-5 and the arc of progress (Aug 07, 2025)
gpt-oss: OpenAI validates the open ecosystem (finally) (Aug 05, 2025)
Towards American Truly Open Models: The ATOM Project (Aug 04, 2025)
Interviewing Ross Taylor on the state of AI: Chinese open models, scaling reasoning, useful tools, and what comes next (Jul 29, 2025)
The White House's plan for open models & AI research in the U.S. (Jul 23, 2025)
Kimi K2 and when "DeepSeek Moments" become normal (Jul 14, 2025)
The American DeepSeek Project (Jul 04, 2025)
Some ideas for what comes next (Jun 23, 2025)
Crafting a good (reasoning) model (Jun 18, 2025)
The rise of reasoning machines (Jun 12, 2025)
What comes next with reinforcement learning (Jun 09, 2025)
How I Write (Jun 06, 2025)
A taxonomy for next-generation reasoning models (Jun 04, 2025)
Claude 4 and Anthropic's bet on code (May 27, 2025)
People use AI more than you think (May 21, 2025)
My path into AI (May 14, 2025)
What people get wrong about the leading Chinese open models: Adoption and censorship (May 06, 2025)
State of play of AI progress (and related brakes on an intelligence explosion) (Apr 30, 2025)
Transparency and (shifting) priority stacks (Apr 28, 2025)
OpenAI's o3: Over-optimization is back and weirder than ever (Apr 19, 2025)
OpenAI's GPT-4.1 and separating the API from ChatGPT (Apr 14, 2025)
Llama 4: Did Meta just push the panic button? (Apr 07, 2025)
RL backlog: OpenAI's many RLs, clarifying distillation, and latent reasoning (Apr 05, 2025)
Gemini 2.5 Pro and Google's second chance with AI (Mar 26, 2025)
Managing frontier model training organizations (or teams) (Mar 19, 2025)
Gemma 3, OLMo 2 32B, and the growing potential of open-source AI (Mar 13, 2025)
Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL (Mar 12, 2025)
Elicitation, the simplest way to understand post-training (Mar 10, 2025)
Where inference-time scaling pushes the market for AI companies (Mar 05, 2025)
GPT-4.5: "Not a frontier model"? (Feb 28, 2025)
Character training: Understanding and crafting a language model's personality (Feb 26, 2025)
Claude 3.7 thonks and what's next for inference-time scaling (Feb 24, 2025)
Grok 3 and an accelerating AI roadmap (Feb 18, 2025)
An unexpected RL Renaissance (Feb 13, 2025)
Deep Research, information vs. insight, and the nature of science (Feb 12, 2025)
Making the U.S. the home for open-source AI (Feb 05, 2025)
Why reasoning models will generalize (Jan 28, 2025)
Interviewing OLMo 2 leads: Open secrets of training language models (Jan 22, 2025)
DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs (Jan 21, 2025)
Let me use my local LMs on Meta Ray-Bans (Jan 15, 2025)
(Voiceover) DeepSeek V3 and the actual cost of training frontier AI models (Jan 09, 2025)
The state of post-training in 2025 (Jan 08, 2025)
Quick recap on the state of reasoning (Jan 02, 2025)
(Voiceover) 2024 Interconnects year in review (Dec 31, 2024)
(Voiceover) OpenAI's o3: The grand finale of AI in 2024 (Dec 20, 2024)
(Voiceover) The AI agent spectrum (Dec 18, 2024)
(Voiceover) OpenAI's Reinforcement Finetuning and RL for the masses (Dec 11, 2024)
Interviewing Finbarr Timbers on the "We are So Back" Era of Reinforcement Learning (Dec 05, 2024)
(Voiceover) OpenAI's o1 using "search" was a PSYOP (Dec 04, 2024)
(Voiceover) OLMo 2 and building effective teams for training language models (Nov 26, 2024)
(Voiceover) Tülu 3: The next era in open post-training (Nov 21, 2024)
(Voiceover) Scaling realities (Nov 14, 2024)
(Voiceover) Saving the National AI Research Resource & my AI policy outlook (Nov 13, 2024)
Interviewing Tim Dettmers on open-source AI: Agents, scaling, quantization and what's next (Nov 07, 2024)
Interviewing Andrew Carr of Cartwheel on the State of Generative AI (Oct 31, 2024)
(Voiceover) Why I build open language models (Oct 30, 2024)
(Voiceover) Claude's agentic future and the current state of the frontier models (Oct 23, 2024)
Interviewing Arvind Narayanan on making sense of AI hype (Oct 17, 2024)
(Voiceover) Building on evaluation quicksand (Oct 16, 2024)
Interviewing Andrew Trask on how language models should store (and access) information (Oct 10, 2024)
How scaling changes model behavior (Oct 09, 2024)
[Article Voiceover] AI Safety's Crux: Culture vs. Capitalism (Oct 02, 2024)
Interviewing Riley Goodside on the science of prompting (Sep 30, 2024)
[Article Voiceover] Llama 3.2 Vision and Molmo: Foundations for the multimodal open-source ecosystem (Sep 27, 2024)
[Article Voiceover] Reverse engineering OpenAI's o1 (Sep 17, 2024)
Futures of the data foundry business model (Sep 11, 2024)
A post-training approach to AI regulation with Model Specs (Sep 10, 2024)
OpenAI's Strawberry, LM self-talk, inference scaling laws, and spending more on inference (Sep 05, 2024)
OLMoE and the hidden simplicity in training better foundation models (Sep 04, 2024)
On the current definitions of open-source AI and the state of the data commons (Aug 28, 2024)
Nous Hermes 3 and exploiting underspecified evaluations (Aug 16, 2024)
Interviewing Ross Taylor on LLM reasoning, Llama fine-tuning, Galactica, agents (Aug 08, 2024)
A recipe for frontier model post-training (Aug 07, 2024)
Interviewing Sebastian Raschka on the state of open LLMs, Llama 3.1, and AI education (Aug 01, 2024)
GPT-4o-mini changed ChatBotArena (Jul 31, 2024)
Llama 3.1 405b, Meta's AI strategy, and the new open frontier model ecosystem (Jul 23, 2024)
SB 1047, AI regulation, and unlikely allies for open models (Jul 17, 2024)
Switched to Claude 3.5 (Jul 03, 2024)
Interviewing Dean Ball on AI policy: CA SB 1047, upcoming AI disaster response, Llama 3 405B, Chinese open-source AI, and scaling laws (Jun 27, 2024)
RLHF Roundup: Trying to get good at PPO, charting RLHF's impact, RewardBench retrospective, and a reward model competition (Jun 26, 2024)
Frontiers in synthetic data (Jun 21, 2024)
Text-to-video AI is already abundant (Jun 18, 2024)
AI for the rest of us (Jun 12, 2024)
A realistic path to robotic foundation models (Jun 05, 2024)
We aren't running out of training data, we are running out of open training data (May 29, 2024)
Name, image, and AI's likeness (May 22, 2024)
OpenAI chases Her (May 16, 2024)
OpenAI's Model (behavior) Spec, RLHF transparency, and personalization questions (May 13, 2024)
RLHF: A thin line between useful and lobotomized (May 01, 2024)
Phi 3 and Arctic: Outlier LMs are hints (Apr 30, 2024)
AGI is what you want it to be (Apr 24, 2024)
Llama 3: Scaling open LLMs to AGI (Apr 21, 2024)
Stop "reinventing" everything to "solve" alignment (Apr 17, 2024)
The end of the "best open LLM" (Apr 15, 2024)
Why we disagree on what open-source AI should be (Apr 03, 2024)
DBRX: The new best open LLM and Databricks' ML strategy (Mar 29, 2024)
Evaluations: Trust, performance, and price (bonus, announcing RewardBench) (Mar 21, 2024)
Model commoditization and product moats (Mar 13, 2024)
The koan of an open-source LLM (Mar 06, 2024)
Interviewing Louis Castricato of Synth Labs and Eleuther AI on RLHF, Gemini Drama, DPO, founding Carper AI, preference data, reward models, and everything in between (Mar 04, 2024)
How to cultivate a high-signal AI feed (Feb 28, 2024)
Google ships it: Gemma open LLMs and Gemini backlash (Feb 22, 2024)
10 Sora and Gemini 1.5 follow-ups: code-base in context, deepfakes, pixel-peeping, inference costs, and more (Feb 20, 2024)
Releases! OpenAI’s Sora for video, Gemini 1.5's infinite context, and a secret Mistral model (Feb 16, 2024)
Why reward models are still key to understanding alignment (Feb 14, 2024)
Alignment-as-a-Service: Scale AI vs. the new guys (Feb 07, 2024)
Open Language Models (OLMos) and the LLM landscape (Feb 01, 2024)
Model merging lessons in The Waifu Research Department (Jan 29, 2024)
Local LLMs, some facts some fiction (Jan 24, 2024)
Multimodal blogging: My AI tools to expand your audience (Jan 17, 2024)
Multimodal LM roundup: Unified IO 2, inputs and outputs, Gemini, LLaVA-RLHF, and RLHF questions (Jan 10, 2024)
Where 2024’s “open GPT4” can’t match OpenAI’s (Jan 05, 2024)
Interviewing Tri Dao and Michael Poli of Together AI on the future of LLM architectures (Dec 21, 2023)