LessWrong (30+ Karma)

By LessWrong


Category: Technology

Subscribers: 4
Reviews: 0
Episodes: 250

Description

Audio narrations of LessWrong posts.

Episodes (title, then date; most recent first)
“Bad Problems Don’t Stop Being Bad Because Somebody’s Wrong About Fault Analysis” by Linch
May 09, 2026
“Write Cause You Have Something to Say” by Logan Riggs
May 08, 2026
“AI is Breaking Two Vulnerability Cultures” by jefftk
May 08, 2026
“Is ProgramBench Impossible?” by frmsaul
May 08, 2026
“Bringing More Expertise to Bear on Alignment” by Edmund Lau, Geoffrey Irving, Cameron Holmes, David Africa
May 08, 2026
[Linkpost] “How to prevent AI’s 2008 moment (We’re hiring)” by felixgaston
May 08, 2026
“AI #167: The Prior Restraint Era Begins” by Zvi
May 08, 2026
“Mechanistic estimation for wide random MLPs” by Jacob_Hilton
May 07, 2026
“Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations” by Subhash Kantamneni, kitft, Euan Ong, Sam Marks
May 07, 2026
“Try, even if they have you cold” by WalterL
May 07, 2026
“A review of “Investigating the consequences of accidentally grading CoT during RL”” by Buck
May 07, 2026
“There is no evidence you should reapply sunscreen every 2 hours.” by Hide
May 07, 2026
“Many individual CEVs are probably quite bad” by Viliam
May 06, 2026
“x-risk-themed” by kave
May 06, 2026
“What is Anthropic?” by Zvi
May 06, 2026
“What if LLMs are mostly crystallized intelligence?” by deep
May 06, 2026
“Your rights when flying to Europe” by Yair Halberstadt
May 06, 2026
“Model Spec Midtraining: Improving How Alignment Training Generalizes” by Chloe Li, saraprice, Sam Marks, Jonathan Kutasov
May 06, 2026
“Motivated reasoning, confirmation bias, and AI risk theory” by Seth Herd
May 05, 2026
“The AI Ad-Hoc Prior Restraint Era Begins” by Zvi
May 05, 2026
“Are you looking up?” by Craig Green
May 05, 2026
[Linkpost] “Interpreting Language Model Parameters” by Lucius Bushnaq, Dan Braun, Oliver Clive-Griffin, Bart Bussmann, Nathan Hu, mivanitskiy, Linda Linsefors, Lee Sharkey
May 05, 2026
“Housing Roundup #15: The War Against Renters” by Zvi
May 05, 2026
“It’s nice of you to worry about me, but I really do have a life” by Viliam
May 04, 2026
“Irretrievability; or, Murphy’s Curse of Oneshotness upon ASI” by Eliezer Yudkowsky
May 04, 2026
“AI Industrial Takeoff — Part 1: Maximum growth rates with current technology” by djbinder
May 04, 2026
“Taking woo seriously but not literally” by Kaj_Sotala
May 04, 2026
“Dairy cows make their misery expensive (but their calves can’t)” by Elizabeth
May 03, 2026
“Measuring the ability of Opus 4.5 to fool narrow classifiers” by Fabien Roger, John Hughes
May 03, 2026
“A new rationalist self-improvement book: the 12 Levers” by spencerg
May 03, 2026
“OpenAI’s red line for AI self-improvement is fundamentally flawed” by Charbel-Raphaël
May 03, 2026
“You Are Not Immune To Mode Collapse” by J Bostock
May 02, 2026
“Primary Care Physicians are Incompetent. We Need More of Them.” by Hide
May 02, 2026
“How Go Players Disempower Themselves to AI” by Ashe Vazquez Nuñez
May 02, 2026
“How much should the ideal person cry wolf?” by KatjaGrace
May 01, 2026
“Conditional misalignment: Mitigations can hide EM behind contextual cues” by Jan Dubiński, Owain_Evans
May 01, 2026
“Risk from fitness-seeking AIs: mechanisms and mitigations” by Alex Mallen
May 01, 2026
“Sanity-checking “Incompressible Knowledge Probes”” by Sturb, LawrenceC
May 01, 2026
“AI unemployment and AI extinction are often the same” by KatjaGrace
May 01, 2026
“AI risk was not invented by AI CEOs to hype their companies” by KatjaGrace
May 01, 2026
“Cyborg evals” by Eye You, frmsaul
Apr 30, 2026
“To what extent is Qwen3-32B predicting its persona?” by Arjun Khandelwal, ryan_greenblatt, Alex Mallen
Apr 30, 2026
“Research Sabotage in ML Codebases” by egan
Apr 30, 2026
“Maybe I was too harsh on deep learning theory (three days ago)” by LawrenceC
Apr 30, 2026
“Notes on Transformer Consciousness” by slavachalnev
Apr 30, 2026
“On today’s panel with Bernie Sanders” by David Scott Krueger
Apr 30, 2026
“No Strong Orthogonality From Selection Pressure” by lumpenspace
Apr 30, 2026
“Learning zero, and what SLT gets wrong about it” by Dmitry Vaintrob
Apr 30, 2026
“The Most Important Charts In The World” by Zvi
Apr 30, 2026
“LLM Style Slop is Absolutely Everywhere” by silentbob
Apr 29, 2026
“Goblin Mode, 24 Hours Later” by Dylan Bowman
Apr 29, 2026
“Let Kids Keep More Productivity Gains” by jefftk
Apr 29, 2026
“llm assistant personas seem increasingly incoherent (some subjective observations)” by nostalgebraist
Apr 29, 2026
“Not a Paper: “Frontier Lab CEOs are Capable of In-Context Scheming”” by LawrenceC
Apr 29, 2026
“The Problem in the “Nerd Sniping” xkcd Comic” by peralice
Apr 29, 2026
“Recursive forecasting: Eliciting long-term forecasts from myopic fitness-seekers” by Jozdien, Alex Mallen
Apr 28, 2026
“Contra Binder on far-UVC and filtration” by jefftk
Apr 28, 2026
“Takes from two months as an aspiring LLM naturalist” by AnnaSalamon
Apr 28, 2026
“Forecasting is Not Overrated and It’s Probably Funded Appropriately” by Ben S.
Apr 28, 2026
“On the political feasibility of stopping AI” by David Scott Krueger
Apr 28, 2026
“Sleeper Agent Backdoor Results Are Messy” by Sebastian Prasanna, Alek Westover, Dylan Xu, Vivek Hebbar, Julian Stastny
Apr 28, 2026
“GPT 5.5: The System Card” by Zvi
Apr 28, 2026
“LessWrong Shows You Social Signals Before the Comment” by TurnTrout
Apr 28, 2026
“Fail safe(r) at alignment by channeling reward-hacking into a “spillway” motivation” by Anders Cairns Woodruff, Alex Mallen
Apr 27, 2026
“Curious cases of financial engineering in biotech” by Abhishaike Mahajan
Apr 27, 2026
“Update on the Alex Bores campaign” by Eric Neyman
Apr 27, 2026
“In defense of parents” by Yair Halberstadt
Apr 27, 2026
“AI companies should publish security assessments” by ryan_greenblatt
Apr 27, 2026
“The other paper that killed deep learning theory” by LawrenceC
Apr 27, 2026
“What holds AI safety together? Co-authorship networks from 200 papers” by Anna Thieser
Apr 27, 2026
“‘Bad faith’ means intentionally misrepresenting your beliefs” by TFD
Apr 27, 2026
“Retrospective on my unsupervised elicitation challenge” by DanielFilan
Apr 27, 2026
“Control protocols don’t always need to know which models are scheming” by Fabien Roger
Apr 26, 2026
“Anthropic spent too much don’t-be-annoying capital on Mythos” by draganover
Apr 26, 2026
“The paper that killed deep learning theory” by LawrenceC
Apr 26, 2026
“Forecasting is Way Overrated, and We Should Stop Funding It” by mabramov
Apr 25, 2026
″“Thinkhaven”” by Raemon
Apr 25, 2026
“Is the Cat Out of the Bag?: Who knows how to make AGI?” by Oliver Sourbut
Apr 25, 2026
“Against the “Permanent” Underclass” by Marcus Plutowski
Apr 25, 2026
“Quick Paper Review: “There Will Be a Scientific Theory of Deep Learning”” by LawrenceC
Apr 25, 2026
“Protecting Cognitive Integrity: Our internal AI use policy (V1)” by Tom DAVID
Apr 24, 2026
“Methodology for inferring propensities of LLMs” by Olli Järviniemi
Apr 24, 2026
“vLLM-Lens: Fast Interpretability Tooling That Scales to Trillion-Parameter Models” by Alan Cooney, Sid Black
Apr 24, 2026
“What Happens When a Model Thinks It Is AGI?” by josh :), David Africa
Apr 24, 2026
“Should We Train Against (CoT) Monitors?” by RohanS
Apr 23, 2026
“If Everyone Reads It, Nobody Dies - Course Launch” by Luc Brinkman, Chris-Lons
Apr 23, 2026
“Does your AI perform badly because you — you, specifically — are a bad person” by Natalie Cargill
Apr 23, 2026
“A “Lay” Introduction to “On the Complexity of Neural Computation in Superposition”” by LawrenceC
Apr 23, 2026
“AI #165: In Our Image” by Zvi
Apr 23, 2026
“An Angry Review of Greg Egan’s “Didicosm”” by LawrenceC
Apr 23, 2026
“Evil is bad, actually (Vassar and Olivia Schaefer)” by plex
Apr 23, 2026
“Your Supplies Probably Won’t Be Stolen in a Disaster” by jefftk
Apr 23, 2026
“Community misconduct disputes are not about facts” by mingyuan
Apr 23, 2026
“Why no new notations since 1960?” by Carl Feynman
Apr 23, 2026
“Opus 4.7 Part 3: Model Welfare” by Zvi
Apr 22, 2026
“Narrow Secret Loyalty Dodges Black-Box Audits” by Alfie Lamerton, Fabien Roger
Apr 22, 2026
“Opus 4.7 Part 2: Capabilities and Reactions” by Zvi
Apr 22, 2026
“10 posts I don’t have time to write” by habryka
Apr 22, 2026
“A taxonomy of barriers to trading with early misaligned AIs” by Alexa Pan
Apr 22, 2026
“$50 million a year for a 10% chance to ban ASI” by Andrea_Miotti, Alex Amadori, Gabriel Alfour
Apr 21, 2026
“Automated Deanonymization is Here” by jefftk
Apr 21, 2026
“Evil is bad, actually (Vassar and Olivia Schaefer callout post)” by plex
Apr 21, 2026
“10 non-boring ways I’ve used AI in the last month” by habryka
Apr 21, 2026
“Introducing LinuxArena” by Tyler Tracy, Ram Potham, Nick Kuhn, Myles H
Apr 21, 2026
“The “Budgeting” Skill Has The Most Betweenness Centrality (Probably)” by JenniferRM
Apr 20, 2026
“Opus 4.7 Part 1: The Model Card” by Zvi
Apr 20, 2026
“Finetuning Borges” by Linch
Apr 20, 2026
“9 kinds of hard-to-verify tasks” by Cleo Nardo
Apr 20, 2026
“How do LLMs generalize when we do training that is intuitively compatible with two off-distribution behaviors?” by dx26, Alek Westover, Vivek Hebbar, Sebastian Prasanna, Buck, Julian Stastny
Apr 20, 2026
“Automating philosophy if Timothy Williamson is correct” by Cleo Nardo
Apr 20, 2026
“CLR’s Safe Pareto Improvements Research Agenda” by Anthony DiGiovanni
Apr 20, 2026
“LLMs are about to disrupt algorithmic media feeds” by lsusr
Apr 20, 2026
“Resources for starting and growing an AI safety org” by Bryce Robertson, Søren Elverlin, Melissa Samworth, jakkdl
Apr 20, 2026
“Quality Matters Most When Stakes are Highest” by LawrenceC
Apr 20, 2026
“Feel like a room has bad vibes? The lighting is probably too “spiky” or too blue” by habryka
Apr 20, 2026
“I did a jhana meditation retreat (in 2024) with Jhourney and it was okay.” by Jules
Apr 20, 2026
“R1 CoT illegibility revisited” by nostalgebraist
Apr 19, 2026
“Reevaluating AGI Ruin in 2026” by lc
Apr 19, 2026
“If It’s Worth Arguing, It’s Worth Arguing With Whiteboards” by Drake Morrison
Apr 19, 2026
“There are only four skills: design, technical, management and physical” by habryka
Apr 19, 2026
“Having OCD is like living in North Korea (Here’s how I escaped)” by Declan Molony
Apr 18, 2026
“Claude knows who you are” by Smaug123
Apr 18, 2026
“Vladimir Putin’s CEV is probably pretty good” by habryka
Apr 18, 2026
“Post-mortem’ing my earliest ML research paper, 7 years later” by LawrenceC
Apr 18, 2026
“If You’ve Never Bought a Tool You Didn’t Need, You’re Not Buying Enough Tools” by Drake Morrison
Apr 18, 2026
“3” by AnnaJo
Apr 18, 2026
“Consent-Based RL: Letting Models Endorse Their Own Training Updates” by Logan Riggs
Apr 17, 2026
“Prompted CoT Early Exit Undermines the Monitoring Benefits of CoT Uncontrollability” by Elle Najt, Asa Cooper Stickland, Xander Davies
Apr 17, 2026
“AI #164: Pre Opus” by Zvi
Apr 17, 2026
[Linkpost] “You can only build safe ASI if ASI is globally banned” by Connor Leahy
Apr 17, 2026
“Specialization is a Driver of Natural Ontology” by johnswentworth
Apr 17, 2026
“On Dwarkesh Patel’s Podcast With Nvidia CEO Jensen Huang” by Zvi
Apr 17, 2026
“Let goodness conquer all that it can defend” by habryka
Apr 17, 2026
“Beware of Well-Written Posts” by alseph
Apr 17, 2026
“You Aren’t in Charge of the Overton Window; Politics Is Not Interior Design” by Davidmanheim
Apr 16, 2026
“Carpathia Day” by Drake Morrison
Apr 16, 2026
“Claude Code, Codex and Agentic Coding #7: Auto Mode” by Zvi
Apr 16, 2026
“Do not conquer what you cannot defend” by habryka
Apr 16, 2026
“What is the Iliad Intensive?” by Leon Lang, Alexander Gietelink Oldenziel, David Udell
Apr 16, 2026
“The Mirror Test Is Complicated” by J Bostock
Apr 15, 2026
“Contra Leicht on AI Pauses” by David Scott Krueger (formerly: capybaralet)
Apr 15, 2026
“Nectome: All That I Know” by Raelifin
Apr 15, 2026
“Effective Altruism, Seen From Slytherin” by Xylix
Apr 15, 2026
“Majority Report” by peralice
Apr 15, 2026
“Current AIs seem pretty misaligned to me” by ryan_greenblatt
Apr 15, 2026
“Contra Byrnes on UV & Cancer” by HedonicEscalator
Apr 15, 2026
“Everyone Has a Plan Until They Get Social Pressure To the Face” by Czynski
Apr 15, 2026
“Mechanisms of Introspective Awareness” by Uzay Macar
Apr 14, 2026
“Claude Mythos #3: Capabilities and Additions” by Zvi
Apr 14, 2026
“Load-Bearing Sincerity: On the Motive Reinforcement Thesis” by Fiora Starlight
Apr 14, 2026
“Diary of a “Doomer”: 12+ years arguing about AI risk (part 1)” by David Scott Krueger (formerly: capybaralet)
Apr 14, 2026
“A Retrospective of Richard Ngo’s 2022 List of Conceptual Alignment Projects” by LawrenceC
Apr 14, 2026
“From personas to intentions: towards a science of motivations for AI models” by David Africa, Jacob Pfau
Apr 14, 2026
“The Shapley Share of Responsibility?” by Raemon
Apr 14, 2026
“Who Killed Common Law?” by Benquo
Apr 14, 2026
“Meaningful Questions Have Return Types” by Drake Morrison
Apr 14, 2026
“Anthropic repeatedly accidentally trained against the CoT, demonstrating inadequate processes” by Alex Mallen, ryan_greenblatt
Apr 14, 2026
“Political Violence Is Never Acceptable” by Zvi
Apr 13, 2026
“Only Law Can Prevent Extinction” by Eliezer Yudkowsky
Apr 13, 2026
“AI Safety’s Biggest Talent Gap Isn’t Researchers. It’s Generalists.” by Topaz, agucova, Alexandra Bates, Parv Mahajan
Apr 13, 2026
“Tomas Bjartur: The Last Prodigy” by Linch
Apr 13, 2026
“Annoyingly Principled People, and what befalls them” by Raemon
Apr 13, 2026
“TAPs or it didn’t happen” by Raemon
Apr 13, 2026
“Returns to intelligence” by RobertM
Apr 13, 2026
“Daycare illnesses” by Nina Panickssery
Apr 13, 2026
“Talk English, Think Something Else” by J Bostock
Apr 13, 2026
“The policy surrounding Mythos marks an irreversible power shift” by sil
Apr 13, 2026
“Sparse Autoencoders for Single-Cell Models” by Ihor Kendiukhov
Apr 13, 2026
“Eggs, rooms, puzzles, and talking about AI” by KatjaGrace
Apr 13, 2026
“Morale” by J Bostock
Apr 12, 2026
“Your Mom is a Chimera” by michaelwaves
Apr 12, 2026
“The Blast Radius Principle” by Martin Sustrik
Apr 12, 2026
“How to make good tea” by RobertM
Apr 12, 2026
“Catching illicit distributed training operations during an AI pause” by Robi Rahman
Apr 12, 2026
[Linkpost] “Scott Alexander gentrified my meetup” by dominicq
Apr 11, 2026
“Pausing AI Is the Best Answer to Post-Alignment Problems” by MichaelDickens
Apr 11, 2026
“Some thoughts on Nectome’s risk and resilience” by Aurelia
Apr 11, 2026
“Chocolate Sloths, Tinder, and Moral Backstops” by J Bostock
Apr 11, 2026
“Dario probably doesn’t believe in superintelligence” by RobertM
Apr 11, 2026
“The Unintelligibility is Ours: Notes on Chain-of-Thought” by 1a3orn
Apr 11, 2026
“If Mythos actually made Anthropic employees 4x more productive, I would radically shorten my timelines” by ryan_greenblatt
Apr 11, 2026
“Claude Mythos #2: Cybersecurity and Project Glasswing” by Zvi
Apr 10, 2026
“Why Control Creates Conflict, and When to Open Instead” by plex
Apr 10, 2026
“Reproducing steering against evaluation awareness in a large open-weight model” by Thomas Read, Bronson Schoen, Joseph Bloom
Apr 10, 2026
“Have we already lost? Part 2: Reasons for Doom” by LawrenceC
Apr 10, 2026
“Model organisms researchers should check whether high LRs defeat their model organisms” by dx26, Sebastian Prasanna, Alek Westover, Vivek Hebbar, Julian Stastny
Apr 10, 2026
“Anthropic did not publish a “risk discussion” of Mythos when required by their RSP” by RobertM
Apr 10, 2026
“Claude Mythos: The System Card” by Zvi
Apr 10, 2026
“Some takes on UV & cancer” by Steven Byrnes
Apr 10, 2026
“AI #163: Mythos Quest” by Zvi
Apr 09, 2026
“Slightly-Super Persuasion Will Do” by Tomás B.
Apr 09, 2026
“Help me launch Obsolete: a book aimed at building a new movement for AI reform” by garrison
Apr 09, 2026
“Have we already lost? Part 1: The Plan in 2024” by LawrenceC
Apr 09, 2026
“Do not be surprised if LessWrong gets hacked” by RobertM
Apr 09, 2026
“One Week in the Rat Farm” by Philip Harker
Apr 09, 2026
“101 Humans of New York on the Risks of AI” by Corm
Apr 09, 2026
“Baking tips” by RobertM
Apr 08, 2026
“An easy coordination problem?” by KatjaGrace
Apr 08, 2026
“Excerpts and Notes on Mythos Model Card” by williawa
Apr 08, 2026
“The effects of caffeine consumption do not decay with a ~5 hour half-life” by kman
Apr 08, 2026
“You don’t know what you are made of till you’ve been stalked across three countries” by Shoshannah Tekofsky
Apr 08, 2026
“Why is Flesh So Weak?” by J Bostock
Apr 08, 2026
“The hard part isn’t noticing when papers are bad, it’s deciding what to do afterwards” by LawrenceC
Apr 08, 2026
“We can prevent progress! Conceptual clarity, and inspiration from the FDA” by KatjaGrace
Apr 08, 2026
“AI as a Trojan horse race” by KatjaGrace
Apr 08, 2026
“My unsupervised elicitation challenge” by DanielFilan
Apr 08, 2026
“Role-playing vs Self-modelling” by Jan_Kulveit
Apr 08, 2026
“Elementary Condensation” by Jan
Apr 08, 2026
“Hedging and Survival-Weighted Planning” by Vaniver
Apr 08, 2026
“Opus’s Schelling Steganography Has Amplifiable Secrecy Against Weaker Eavesdroppers” by Elle Najt
Apr 08, 2026
“An Alignment Journal: Features and policies” by JessRiedel, Dan MacKinlay, Luca, Daniel Murfet, david reinstein
Apr 08, 2026
[Linkpost] “Questions raised about OpenAI leaders’ trustworthiness by the New Yorker” by Remmelt
Apr 07, 2026
“Fantasy ideology” by Ninety-Three
Apr 07, 2026
“Claude Mythos System Card Preview” by anaguma
Apr 07, 2026
“My picture of the present in AI” by ryan_greenblatt
Apr 07, 2026
[Linkpost] “[Paper] Stringological sequence prediction I” by Vanessa Kosoy
Apr 07, 2026
“We’re actually running out of benchmarks to upper bound AI capabilities” by LawrenceC
Apr 07, 2026
“Don’t write for LLMs, just record everything” by RobertM
Apr 07, 2026
“Contra Nina Panickssery on advice for children” by Sean Herrington
Apr 07, 2026
“By Strong Default, ASI Will End Liberal Democracy” by MichaelDickens
Apr 07, 2026
“AIs can now often do massive easy-to-verify SWE tasks and I’ve updated towards shorter timelines” by ryan_greenblatt
Apr 06, 2026
“Paper close reading: “Why Language Models Hallucinate”” by LawrenceC
Apr 06, 2026
“Ten different ways of thinking about Gradual Disempowerment” by David Scott Krueger (formerly: capybaralet)
Apr 05, 2026
“11 pieces of advice for children” by Nina Panickssery
Apr 05, 2026
“Steering Might Stop Working Soon” by J Bostock
Apr 05, 2026
“Am I the baddie?” by Ustice
Apr 05, 2026
“Academic Proof-of-Work in the Age of LLMs” by LawrenceC
Apr 05, 2026
“Positive sum does not mean “win-win”” by loops
Apr 05, 2026
“Considerations for growing the pie” by Zach Stein-Perlman
Apr 05, 2026
″“Following the incentives”” by David Scott Krueger (formerly: capybaralet)
Apr 04, 2026
“Chicken-Free Egg Whites” by jefftk
Apr 04, 2026
“dark ilan” by ozymandias
Apr 04, 2026
“Mean field sequence: an introduction” by Dmitry Vaintrob, Lauren Greenspan
Apr 04, 2026
“Democracy Dies With The Rifleman” by Vaniver
Apr 04, 2026
“The bar is lower than you think” by XelaP
Apr 04, 2026
“Anthropic Responsible Scaling Policy v3: Dive Into The Details” by Zvi
Apr 04, 2026
“Did Anyone Predict the Industrial Revolution?” by Lost Futures
Apr 04, 2026
“Why do I believe preserving structure is enough?” by Aurelia
Apr 04, 2026
“There should be $100M grants to automate AI safety” by Marius Hobbhahn
Apr 03, 2026
“Sadly, The Whispering Earring” by Dentosal
Apr 03, 2026
“Common research advice #2: say precisely what you want to say” by LawrenceC
Apr 03, 2026
“AI #162: Visions of Mythos” by Zvi
Apr 03, 2026
“2026: The year of throwing my agency at my health (now with added cyborgism)” by Ruby
Apr 03, 2026
[Linkpost] “Q1 2026 Timelines Update” by Daniel Kokotajlo, elifland, bhalstead
Apr 03, 2026
“How social ideas get corrupt” by Kaj_Sotala
Apr 02, 2026
“The Indestructible Future” by WillPetillo
Apr 02, 2026
“My most common advice for junior researchers” by LawrenceC
Apr 02, 2026
“The Practical Guide to Superbabies” by GeneSmith
Apr 02, 2026
“The Corner-Stone” by Benquo
Apr 02, 2026
“Systematically dismantle the AI compute supply chain.” by David Scott Krueger (formerly: capybaralet)
Apr 02, 2026