PyTorch Developer Podcast

By Edward Yang, Team PyTorch

To listen to this podcast, open the Podcast Republic app, available on the Google Play Store and Apple App Store.


Category: Technology


Subscribers: 17
Reviews: 0
Episodes: 82

Description

The PyTorch Developer Podcast is a place for the PyTorch dev team to cover, in bite-sized (10-20 min) episodes, all sorts of internal development topics in PyTorch.

Episodes

TORCH_TRACE and tlparse (Apr 29, 2024)
Higher order operators (Apr 21, 2024)
Inductor - Post-grad FX passes (Apr 12, 2024)
CUDA graph trees (Mar 24, 2024)
Min-cut partitioner (Mar 17, 2024)
AOTInductor (Mar 02, 2024)
Tensor subclasses and PT2 (Feb 24, 2024)
Compiled autograd (Feb 19, 2024)
PT2 extension points (Feb 05, 2024)
Inductor - Define-by-run IR (Jan 24, 2024)
Unsigned integers (Jan 17, 2024)
Inductor - IR (Jan 16, 2024)
Dynamo - VariableTracker (Jan 12, 2024)
Unbacked SymInts (Feb 21, 2023)
Zero-one specialization (Feb 20, 2023)
torchdynamo (Dec 06, 2022)
PyTorch 2.0 (Dec 04, 2022)
History of functorch (Nov 07, 2022)
Learning rate schedulers (Jun 13, 2022)
Weak references (Jun 06, 2022)
Strides (May 30, 2022)
AOTAutograd (May 09, 2022)
Dispatcher questions with Sherlock (May 02, 2022)
New CI (Apr 25, 2022)
Python exceptions (Apr 17, 2022)
Torch vs ATen APIs (Apr 11, 2022)
All about NVIDIA GPUs (Sep 24, 2021)
Tensor subclasses and Liskov substitution principle (Sep 16, 2021)
Half precision (Sep 10, 2021)
DataLoader with multiple workers leaks memory (Sep 01, 2021)
Batching (Aug 18, 2021)
Multiple dispatch in __torch_function__ (Aug 10, 2021)
Multithreading (Aug 03, 2021)
Asynchronous versus synchronous execution (Jul 27, 2021)
gradcheck (Jul 23, 2021)
torch.use_deterministic_algorithms (Jul 21, 2021)
Reference counting (Jul 20, 2021)
Memory layout (Jul 13, 2021)
pytorch-probot (Jul 12, 2021)
API design via lexical and dynamic scoping (Jul 09, 2021)
Intro to distributed (Jul 08, 2021)
Double backwards (Jul 07, 2021)
Functional modules (Jul 06, 2021)
CUDA graphs (Jun 28, 2021)
Default arguments (Jun 25, 2021)
Anatomy of a domain library (Jun 24, 2021)
TensorAccessor (Jun 23, 2021)
Random number generators (Jun 22, 2021)
vmap (Jun 21, 2021)
Expect tests (Jun 18, 2021)
XLA (Jun 17, 2021)
TH (Jun 16, 2021)
TorchScript (Jun 15, 2021)
CMake (Jun 14, 2021)
torchdeploy (Jun 11, 2021)
C++ frontend (Jun 10, 2021)
PyObject preservation (Jun 09, 2021)
Mobile selective build (Jun 08, 2021)
torch.nn (Jun 07, 2021)
Code generation (Jun 04, 2021)
Why is autograd so complicated (Jun 03, 2021)
__torch_function__ (Jun 02, 2021)
TensorIterator (Jun 01, 2021)
native_functions.yaml (May 28, 2021)
Serialization (May 27, 2021)
Continuous integration (May 26, 2021)
Stacked diffs and ghstack (May 25, 2021)
Shared memory (May 24, 2021)
Automatic mixed precision (May 21, 2021)
Conjugate views (May 20, 2021)
History and constraints of Tensor (May 19, 2021)
How new operators are authored (May 18, 2021)
The life and death of Variable (May 17, 2021)
Backend extensibility (May 14, 2021)
The road to structured kernels (May 13, 2021)
Functionalization (May 12, 2021)
Just enough CUDA to be dangerous (May 11, 2021)
Inference mode (May 10, 2021)
Vectorization (May 07, 2021)
Dynamic library structure (May 06, 2021)
History and constraints of the dispatcher (May 05, 2021)
Binding C++ objects to Python (May 04, 2021)