LessWrong (30+ Karma)

By LessWrong

To listen to this podcast, open the Podcast Republic app, available on the Google Play Store and the Apple App Store.

Image by LessWrong

Category: Technology

Subscribers: 4
Reviews: 0
Episodes: 250

Description

Audio narrations of LessWrong posts.

Episode | Date
“New 80k problem profile: extreme power concentration” by rosehadshar | Dec 12, 2025
“AI #146: Chipping In” by Zvi | Dec 12, 2025
“Annals of Counterfactual Han” by GenericModel | Dec 12, 2025
“Cognitive Tech from Algorithmic Information Theory” by Cole Wyeth | Dec 12, 2025
“Childhood and Education #15: Got To Get Out” by Zvi | Dec 12, 2025
“Weird Generalization & Inductive Backdoors” by Jorio Cocola, Owain_Evans, dylan_f | Dec 11, 2025
“If Anyone Builds It Everyone Dies, another semi-outsider review” by manueldelrio | Dec 11, 2025
“My AGI safety research—2025 review, ’26 plans” by Steven Byrnes | Dec 11, 2025
“North Sentinelese Post-Singularity” by Cleo Nardo | Dec 11, 2025
“Rock Paper Scissors is Not Solved, In Practice” by Linch | Dec 11, 2025
“MIRI Comms is hiring” by Duncan Sabien (Inactive) | Dec 11, 2025
“Gradual Disempowerment Monthly Roundup #3” by Raymond Douglas | Dec 11, 2025
“Follow-through on Bay Solstice” by Raemon | Dec 11, 2025
“Most Algorithmic Progress is Data Progress [Linkpost]” by Noosphere89 | Dec 10, 2025
“Selling H200s to China Is Unwise and Unpopular” by Zvi | Dec 10, 2025
“The funding conversation we left unfinished” by jenn | Dec 10, 2025
“Human Dignity: a review” by owencb | Dec 10, 2025
“Insights into Claude Opus 4.5 from Pokémon” by Julian Bradshaw | Dec 10, 2025
“My experience running a 100k” by Alexandre Variengien | Dec 09, 2025
“[paper] Auditing Games for Sandbagging” by Jordan Taylor, Joseph Bloom | Dec 09, 2025
“Towards Categorization of Adlerian Excuses” by romeostevensit | Dec 09, 2025
“Every point of intervention” by TsviBT | Dec 09, 2025
“How Stealth Works” by Linch | Dec 09, 2025
“Reward Function Design: a starter pack” by Steven Byrnes | Dec 08, 2025
“We need a field of Reward Function Design” by Steven Byrnes | Dec 08, 2025
“2025 Unofficial LessWrong Census/Survey” by Screwtape | Dec 08, 2025
“Little Echo” by Zvi | Dec 08, 2025
“I said hello and greeted 1,000 people at 5am this morning” by Mr. Keating | Dec 08, 2025
“AI in 2025: gestalt” by technicalities | Dec 07, 2025
“Eliezer’s Unteachable Methods of Sanity” by Eliezer Yudkowsky | Dec 07, 2025
“Answering a child’s questions” by Alex_Altair | Dec 06, 2025
“The corrigibility basin of attraction is a misleading gloss” by Jeremy Gillen | Dec 06, 2025
“why america can’t build ships” by bhauth | Dec 06, 2025
“Help us find founders for new AI safety projects” by lukeprog | Dec 06, 2025
“Critical Meditation Theory” by lsusr | Dec 06, 2025
“Announcing: Agent Foundations 2026 at CMU” by David Udell, Alexander Gietelink Oldenziel, windows, Matt Dellago | Dec 06, 2025
“An Ambitious Vision for Interpretability” by leogao | Dec 05, 2025
“Journalist’s inquiry into a core organiser breaking his nonviolence commitment and leaving Stop AI” by Remmelt | Dec 05, 2025
“Is Friendly AI an Attractor? Self-Reports from 22 Models Say Probably Not” by Josh Snider | Dec 05, 2025
“Epistemology of Romance, Part 2” by DaystarEld | Dec 05, 2025
“Center on Long-Term Risk: Annual Review & Fundraiser 2025” by Tristan Cook | Dec 05, 2025
“AI #145: You’ve Got Soul” by Zvi | Dec 05, 2025
“The behavioral selection model for predicting AI motivations” by Alex Mallen, Buck | Dec 04, 2025
[Linkpost] “Embedded Universal Predictive Intelligence” by Cole Wyeth | Dec 04, 2025
“Categorizing Selection Effects” by romeostevensit | Dec 04, 2025
“Front-Load Giving Because of Anthropic Donors?” by jefftk | Dec 04, 2025
“Beating China to ASI” by PeterMcCluskey | Dec 04, 2025
“On Dwarkesh Patel’s Second Interview With Ilya Sutskever” by Zvi | Dec 04, 2025
“Racing For AI Safety™ was always a bad idea, right?” by Wei Dai | Dec 03, 2025
“6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes | Dec 03, 2025
“Human art in a post-AI world should be strange” by Abhishaike Mahajan | Dec 03, 2025
“Effective Pizzaism” by Screwtape | Dec 03, 2025
“Becoming a Chinese Room” by Raelifin | Dec 02, 2025
“Reward Mismatches in RL Cause Emergent Misalignment” by Zvi | Dec 02, 2025
“Future Proofing Solstice” by Raemon | Dec 02, 2025
“MIRI’s 2025 Fundraiser” by alexvermeer | Dec 02, 2025
“The 2024 LessWrong Review” by RobertM | Dec 01, 2025
“A Statistical Analysis of Inkhaven” by Ben Pace | Dec 01, 2025
“Announcing: OpenAI’s Alignment Research Blog” by Naomi Bashkansky | Dec 01, 2025
“Interview: What it’s like to be a bat” by Saul Munn | Dec 01, 2025
“How Can Interpretability Researchers Help AGI Go Well?” by Neel Nanda | Dec 01, 2025
“A Pragmatic Vision for Interpretability” by Neel Nanda | Dec 01, 2025
[Linkpost] “How middle powers may prevent the development of artificial superintelligence” by Alex Amadori, Gabriel Alfour, Andrea_Miotti, Eva_B | Dec 01, 2025
“Claude Opus 4.5 Is The Best Model Available” by Zvi | Dec 01, 2025
“Insulin Resistance and Glycemic Index” by lsusr | Dec 01, 2025
“November Retrospective” by johnswentworth | Dec 01, 2025
“Inkhaven Retrospective” by abramdemski | Nov 30, 2025
“Explosive Skill Acquisition” by Ben Goldhaber | Nov 30, 2025
“A Blogger’s Guide To The 21st Century” by Screwtape | Nov 30, 2025
“Ben’s 10 Tips for Event Feedback Forms” by Ben Pace | Nov 30, 2025
“The Moonrise Problem” by johnswentworth | Nov 30, 2025
College life with short AGI timelines | Nov 30, 2025
“The Joke” by Ape in the coat | Nov 30, 2025
“I wrote a blog post every day for a month, and all I got was this lousy collection of incoherent ramblings” by Dentosal | Nov 30, 2025
Change My Mind: The Rationalist Community is a Gift Economy | Nov 29, 2025
Epistemology of Romance, Part 1 | Nov 29, 2025
“A Harried Meeting” by Ben Pace | Nov 29, 2025
Drugs Aren’t A Moral Category | Nov 29, 2025
Claude 4.5 Opus’ Soul Document | Nov 29, 2025
Should you work with evil people? | Nov 29, 2025
Unless its governance changes, Anthropic is untrustworthy | Nov 29, 2025
The Missing Genre: Heroic Parenthood - You can have kids and still punch the sun | Nov 29, 2025
“Tests of LLM introspection need to rule out causal bypassing” by Adam Morris, Dillon Plunkett | Nov 29, 2025
Not A Love Letter, But A Thank You Letter | Nov 29, 2025
“Ruby’s Ultimate Guide to Thoughtful Gifts” by Ruby | Nov 28, 2025
Writing advice: Why people like your quick bullshit takes better than your high-effort posts | Nov 28, 2025
A Thanksgiving Memory | Nov 28, 2025
A Taxonomy of Bugs (Lists) | Nov 28, 2025
Claude Opus 4.5: Model Card, Alignment and Safety | Nov 28, 2025
“Despair, Serenity, Song and Nobility in ‘Hollow Knight: Silksong’” by Ben Pace | Nov 28, 2025
The Best Lack All Conviction: A Confusing Day in the AI Village | Nov 28, 2025
Will We Get Alignment by Default? — with Adrià Garriga-Alonso | Nov 28, 2025
Information Hygiene | Nov 28, 2025
You Are Much More Salient To Yourself Than To Everyone Else | Nov 28, 2025
“Subliminal Learning Across Models” by draganover, Andi Bhongade, tolgadur, Mary Phuong, LASR Labs | Nov 27, 2025
Alignment remains a hard, unsolved problem | Nov 27, 2025
Just explain it to someone | Nov 27, 2025
Principles of a Rationality Dojo | Nov 26, 2025
Postmodernism for STEM Types: A Clear-Language Guide to Conflict Theory | Nov 26, 2025
“Training PhD Students to be Fat Newts (Part 2)” by alkjash | Nov 26, 2025
Snippets on Living In Reality | Nov 26, 2025
Courtship Confusions Post-Slutcon | Nov 26, 2025
Training PhD Students to be Fat Newts (Part 1) | Nov 26, 2025
“Evaluating honesty and lie detection techniques on a diverse suite of dishonest models” by Sam Marks, Johannes Treutlein, evhub, Fabien Roger | Nov 26, 2025
Takeaways from the Eleos Conference on AI Consciousness and Welfare | Nov 26, 2025
Evolution & Freedom | Nov 26, 2025
Reasons Why I Cannot Sleep | Nov 26, 2025
The Economics of Replacing Call Center Workers With AIs | Nov 26, 2025
Three things that surprised me about technical grantmaking at Coefficient Giving (fka Open Phil) | Nov 26, 2025
“OpenAI finetuning metrics: What is going on with the loss curves?” by jorio, James Chua | Nov 26, 2025
Alignment will happen by default. What’s next? | Nov 25, 2025
“Maybe Insensitive Functions are a Natural Ontology Generator?” by johnswentworth | Nov 25, 2025
The Enemy Gets The Last Hit | Nov 24, 2025
Reasoning Models Sometimes Output Illegible Chains of Thought | Nov 24, 2025
The Coalition | Nov 24, 2025
Gemini 3 Pro Is a Vast Intelligence With No Spine | Nov 24, 2025
“The LessWrong Team Was Selling Dollars For 86 Cents” by Screwtape | Nov 24, 2025
NATO is dangerously unaware of its military vulnerability | Nov 24, 2025
Inkhaven Retrospective | Nov 24, 2025
“Stop Applying And Get To Work” by plex | Nov 23, 2025
Show Review: Masquerade | Nov 23, 2025
You can just do things | Nov 23, 2025
I’ll be sad to lose the puzzles | Nov 23, 2025
Literacy is Decreasing Among the Intellectual Class | Nov 23, 2025
Traditional Food | Nov 23, 2025
Easy vs Hard Emotional Vulnerability | Nov 23, 2025
What kind of person is DeepSeek’s founder, Liang Wenfeng? An answer from his old university classmate. | Nov 23, 2025
OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist | Nov 23, 2025
Eight Heuristics of Anti-Epistemology | Nov 23, 2025
Market Logic I | Nov 22, 2025
“Book Review: Wizard’s Hall” by Screwtape | Nov 22, 2025
D&D.Sci Thanksgiving: the Festival Feast | Nov 22, 2025
Be Naughty | Nov 22, 2025
Abstract advice to researchers tackling the difficult core problems of AGI alignment | Nov 22, 2025
Why Not Just Train For Interpretability? | Nov 22, 2025
“Natural emergent misalignment from reward hacking in production RL” by evhub, Monte M, Benjamin Wright, Jonathan Uesato | Nov 21, 2025
“AI #143: Everything, Everywhere, All At Once” by Zvi | Nov 21, 2025
“Rescuing truth in mathematics from the Liar’s Paradox using fuzzy values” by Adrià Garriga-alonso | Nov 21, 2025
Contra Collisteru: You Get About One Carthage | Nov 21, 2025
Reading My Diary: 10 Years Since CFAR | Nov 21, 2025
What Do We Tell the Humans? Errors, Hallucinations, and Lies in the AI Village | Nov 21, 2025
“Evrart Claire: A Case Study in Anti-Epistemology” by Ben Pace | Nov 21, 2025
“The Boring Part of Bell Labs” by Elizabeth | Nov 21, 2025
“[Paper] Output Supervision Can Obfuscate the CoT” by jacob_drori, lukemarks, cloud, TurnTrout | Nov 20, 2025
“Dominance: The Standard Everyday Solution To Akrasia” by johnswentworth | Nov 20, 2025
Gemini 3 is Evaluation-Paranoid and Contaminated | Nov 20, 2025
“Thinking about reasoning models made me less worried about scheming” by Fabien Roger | Nov 20, 2025
“What Is The Basin Of Convergence For Kelly Betting?” by johnswentworth | Nov 20, 2025
“In Defense of Goodness” by abramdemski | Nov 20, 2025
“Out-paternalizing the government (getting oxygen for my baby)” by Ruby | Nov 20, 2025
“Beren’s Essay on Obedience and Alignment” by StanislavKrym | Nov 20, 2025
“Preventing covert ASI development in countries within our agreement” by Aaron_Scher | Nov 20, 2025
“Current LLMs seem to rarely detect CoT tampering” by Bart Bussmann, Arthur Conmy, Neel Nanda, Senthooran Rajamanoharan, Josh Engels, Bartosz Cywiński | Nov 19, 2025
“The Bughouse Effect” by TsviBT | Nov 19, 2025
“Serious Flaws in CAST” by Max Harms | Nov 19, 2025
“Memories of a British Boarding School #2” by Ben Pace | Nov 19, 2025
“Automate, automate it all” by habryka | Nov 19, 2025
“How the aliens next door shower” by Ruby | Nov 19, 2025
“Victor Taelin’s notes on Gemini 3” by Gunnar_Zarncke | Nov 19, 2025
“Anthropic is (probably) not meeting its RSP security commitments” by habryka | Nov 19, 2025
“Considerations for setting the FLOP thresholds in our example international AI agreement” by peterbarnett, Aaron_Scher | Nov 19, 2025
“On Writing #2” by Zvi | Nov 18, 2025
“New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence” by Aaron_Scher, David Abecassis, Brian Abeyta, peterbarnett | Nov 18, 2025
“Status Is The Game Of The Losers’ Bracket” by johnswentworth | Nov 18, 2025
“Eat The Richtext” by dreeves | Nov 18, 2025
“Small batches and the mythical single piece flow” by habryka | Nov 18, 2025
“How Colds Spread” by RobertM | Nov 18, 2025
“Middlemen Are Eating the World (And That’s Good, Actually)” by Linch | Nov 18, 2025
“Why is American mass-market tea so terrible?” by RobertM | Nov 18, 2025
“An Analogue Of Set Relationships For Distribution” by johnswentworth, David Lorell | Nov 18, 2025
“AI 2025 - Last Shipmas” by Simon Lermen | Nov 18, 2025
“Varieties Of Doom” by jdp | Nov 18, 2025
“Mediators: a different route through conflict” by Ben Pace | Nov 17, 2025
“Lobsang’s Children” by Tomás B. | Nov 17, 2025
“Close open loops” by habryka | Nov 17, 2025
“Video games are philosophy’s playground” by Rachel Shu | Nov 17, 2025
“Mixed Feelings on Social Munchkinry” by Screwtape | Nov 17, 2025
“Diagonalization: A (slightly) more rigorous model of paranoia” by habryka | Nov 17, 2025
“Where is the Capital? An Overview” by johnswentworth | Nov 17, 2025
“Matrices map between biproducts” by jessicata | Nov 16, 2025
“Why does ChatGPT think mammoths were alive December?” by Steffee | Nov 16, 2025
“7 Vicious Vices of Rationalists” by Ben Pace | Nov 16, 2025
“The skills and physics of high-performance driving, Pt. 1” by Ruby | Nov 16, 2025
“Put numbers on stuff, all the time, otherwise scope insensitivity will eat you” by habryka | Nov 16, 2025
“AI safety undervalues founders” by Ryan Kidd | Nov 16, 2025
“Your Clone Wants to Kill You Because You Lack Self Knowledge” by Algon | Nov 16, 2025
“Don’t use the phrase ‘human values’” by Nina Panickssery | Nov 15, 2025
“Generation Ship: A Protest Song For PauseAI” by LoganStrohl | Nov 15, 2025
“Increasing marginal returns to effort are common” by habryka | Nov 15, 2025
“‘But You’d Like To Feel Companionate Love, Right? ... Right?’” by johnswentworth | Nov 15, 2025
“Understanding and Controlling LLM Generalization” by Daniel Tan | Nov 15, 2025
“AI Craziness: Additional Suicide Lawsuits and The Fate of GPT-4o” by Zvi | Nov 15, 2025
“AI Corrigibility Debate: Max Harms vs. Jeremy Gillen” by Liron, Max Harms, Jeremy Gillen | Nov 14, 2025
“10” by Ben Pace | Nov 14, 2025
“Everyone has a plan until they get lied to the face” by Screwtape | Nov 14, 2025
“The rare, deadly virus lurking in the Southwest US, and the bigger picture” by eukaryote | Nov 14, 2025
“Creditworthiness should not be for sale” by habryka | Nov 14, 2025
“Types of systems that could be useful for agent foundations” by Alex_Altair | Nov 14, 2025
“The Charge of the Hobby Horse” by TsviBT | Nov 14, 2025
“Two can keep a secret if one is dead. So please share everything with at least one person.” by habryka | Nov 14, 2025
“Why Truth First?” by johnswentworth | Nov 14, 2025
“Orient Speed in the 21st Century” by Raemon | Nov 14, 2025
“Tell people as early as possible it’s not going to work out” by habryka | Nov 14, 2025
“Epistemic Spot Check: Expected Value of Donating to Alex Bores’s Congressional Campaign” by MichaelDickens | Nov 14, 2025
“(Fantasy) -> (Planning): A Core Mental Move For Agentic Humans?” by johnswentworth | Nov 14, 2025
“Weight-sparse transformers have interpretable circuits” by leogao | Nov 13, 2025
“What’s so hard about...? A question worth asking” by Ruby | Nov 13, 2025
“Paranoia rules everything around me” by habryka | Nov 13, 2025
“Favorite quotes from ‘High Output Management’” by Nina Panickssery | Nov 13, 2025
“The Pope Offers Wisdom” by Zvi | Nov 13, 2025
“Introducing faruvc.org” by jefftk | Nov 12, 2025
“Please, Don’t Roll Your Own Metaethics” by Wei Dai | Nov 12, 2025
“Warning Aliens About the Dangerous AI We Might Create” by James_Miller, avturchin | Nov 12, 2025
“Do not hand off what you cannot pick up” by habryka | Nov 12, 2025
“5 Things I Learned After 10 Days of Inkhaven” by Ben Pace | Nov 12, 2025
“How I Learned That I Don’t Feel Love” by johnswentworth | Nov 12, 2025
“Consciousness as a Distributed Ponzi Scheme” by abramdemski | Nov 12, 2025
“Kimi K2 Thinking” by Zvi | Nov 11, 2025
“France is ready to stand alone” by Lucie Philippon | Nov 11, 2025
“Steering Language Models with Weight Arithmetic” by Fabien Roger, constanzafierro | Nov 11, 2025
“The problem of graceful deference” by TsviBT | Nov 11, 2025
“How likely is dangerous AI in the short term?” by Nikola Jurkovic | Nov 11, 2025
“Questioning the Requirements” by habryka | Nov 11, 2025
“Andrej Karpathy on LLM cognitive deficits” by Nina Panickssery | Nov 11, 2025
[Linkpost] “Untitled Draft” by Gabriel Alfour | Nov 10, 2025
“An Ontology for AI Cults and Cyber Egregores” by Jan_Kulveit | Nov 10, 2025
“Myopia Mythology” by abramdemski | Nov 10, 2025
“Three Kinds Of Ontological Foundations” by johnswentworth | Nov 10, 2025
“Learning information which is full of spiders” by Screwtape | Nov 10, 2025
[Linkpost] “Book Announcement: The Gentle Romance” by Richard_Ngo | Nov 10, 2025
“Manifest X DC Opening Benediction - Making Friends Along the Way” by JohnofCharleston | Nov 10, 2025
“Problems I’ve Tried to Legibilize” by Wei Dai | Nov 10, 2025
“Condensation” by abramdemski | Nov 09, 2025
“One Shot Singalonging is an attitude, not a skill or song-difficulty-level” by Raemon | Nov 09, 2025
“Insofar As I Think LLMs ‘Don’t Really Understand Things’, What Do I Mean By That?” by johnswentworth | Nov 09, 2025
“Omniscaling to MNIST” by cloud | Nov 08, 2025
“Comparing Payor & Löb” by abramdemski | Nov 08, 2025
“Against ‘You can just do things’” by zroe1 | Nov 08, 2025
“Unexpected Things that are People” by Ben Goldhaber | Nov 08, 2025
“Escalation and perception” by TsviBT | Nov 08, 2025
“Entity Review: Pythia” by plex | Nov 08, 2025
“Mourning a life without AI” by Nikola Jurkovic | Nov 08, 2025
“AI is not inevitable.” by David Scott Krueger (formerly: capybaralet) | Nov 08, 2025
“Anthropic & Dario’s dream” by Simon Lermen | Nov 08, 2025
“13 Arguments About a Transition to Neuralese AIs” by Rauno Arike | Nov 07, 2025
“AI Safety’s Berkeley Bubble and the Allies We’re Not Even Trying to Recruit” by Mr. Counsel | Nov 07, 2025
“A country of alien idiots in a datacenter: AI progress and public alarm” by Seth Herd | Nov 07, 2025
[Linkpost] “The Hawley-Blumenthal AI Risk Evaluation Act” by David Abecassis | Nov 07, 2025
“Two easy digital intentionality practices” by mingyuan | Nov 07, 2025
“Toward Statistical Mechanics Of Interfaces Under Selection Pressure” by johnswentworth, David Lorell | Nov 07, 2025