Generally Intelligent

By Kanjun Qiu

Image by Kanjun Qiu

Category: Technology

Subscribers: 11
Reviews: 0
Episodes: 35

Description

Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.

Episodes

Episode 35: Percy Liang, Stanford: On the paradigm shift and societal effects of foundation models (May 09, 2024)
Episode 34: Seth Lazar, Australian National University: On legitimate power, moral nuance, and the political philosophy of AI (Mar 12, 2024)
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference (Aug 09, 2023)
Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize (Jun 22, 2023)
Episode 31: Bill Thompson, UC Berkeley, on how cultural evolution shapes knowledge acquisition (Mar 29, 2023)
Episode 30: Ben Eysenbach, CMU, on designing simpler and more principled RL algorithms (Mar 23, 2023)
Episode 29: Jim Fan, NVIDIA, on foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant (Mar 09, 2023)
Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems (Mar 01, 2023)
Episode 27: Noam Brown, FAIR, on achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time (Feb 09, 2023)
Episode 26: Sugandha Sharma, MIT, on biologically inspired neural architectures, how memories can be implemented, and control theory (Jan 17, 2023)
Episode 25: Nicklas Hansen, UCSD, on long-horizon planning and why algorithms don't drive research progress (Dec 16, 2022)
Episode 24: Jack Parker-Holder, DeepMind, on open-endedness, evolving agents and environments, online adaptation, and offline learning (Dec 06, 2022)
Episode 23: Celeste Kidd, UC Berkeley, on attention and curiosity, how we form beliefs, and where certainty comes from (Nov 22, 2022)
Episode 22: Archit Sharma, Stanford, on unsupervised and autonomous reinforcement learning (Nov 17, 2022)
Episode 21: Chelsea Finn, Stanford, on the biggest bottlenecks in robotics and reinforcement learning (Nov 03, 2022)
Episode 20: Hattie Zhou, Mila, on supermasks, iterative learning, and fortuitous forgetting (Oct 14, 2022)
Episode 19: Minqi Jiang, UCL, on environment and curriculum design for general RL agents (Jul 19, 2022)
Episode 18: Oleh Rybkin, UPenn, on exploration and planning with world models (Jul 11, 2022)
Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology (Feb 28, 2022)
Episode 16: Yilun Du, MIT, on energy-based models, implicit functions, and modularity (Dec 21, 2021)
Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory (Oct 15, 2021)
Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement (Sep 24, 2021)
Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning (Sep 10, 2021)
Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment and measurement (Jun 18, 2021)
Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI (May 20, 2021)
Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI (May 12, 2021)
Episode 09: Drew Linsley, Brown, on inductive biases for vision and generalization (Apr 02, 2021)
Episode 08: Giancarlo Kerg, Mila, on approaching deep learning from mathematical foundations (Mar 27, 2021)
Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models (Mar 18, 2021)
Episode 06: Julian Chibane, MPI-INF, on 3D reconstruction using implicit functions (Mar 05, 2021)
Episode 05: Katja Schwarz, MPI-IS, on GANs, implicit functions, and 3D scene understanding (Feb 24, 2021)
Episode 04: Joel Lehman, OpenAI, on evolution, open-endedness, and reinforcement learning (Feb 17, 2021)
Episode 03: Cinjon Resnick, NYU, on activity and scene understanding (Feb 01, 2021)
Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process (Jan 07, 2021)
Episode 01: Kelvin Guu, Google AI, on language models & overlooked research problems (Dec 15, 2020)