Sean Welleck's Home Page
The Thesis Review Podcast

E-mail: thesisreviewpodcast {@ | at} gmail.com
Links:
[apple podcasts]
[spotify]
[soundcloud]
[twitter]
Support: [patreon]
[buy me a coffee]
Each episode of The Thesis Review is a conversation centered on a researcher's PhD thesis, giving insight into their history, revisiting older ideas, and offering perspective on how their research has evolved (or stayed the same) since.
Episodes
- [42] Charles Sutton | Efficient Training Methods for Conditional Random Fields
[episode notes]
- [41] Talia Ringer | Proof Repair
[episode notes]
- [40] Lisa Lee | Learning Embodied Agents with Scalably-Supervised Reinforcement Learning
[episode notes]
- [39] Burr Settles | Curious Machines: Active Learning with Structured Instances
[episode notes]
- [38] Andrew Lampinen | A Computational Framework for Learning and Transforming Task Representations
[episode notes]
- [37] Joonkoo Park | Experiential Effects on the Neural Substrates of Visual Word and Number Processing
[episode notes]
- [36] Dieuwke Hupkes | Hierarchy and interpretability in neural models of language processing
[episode notes]
- [35] Armando Solar-Lezama | Program Synthesis by Sketching
[episode notes]
- [34] Sasha Rush | Lagrangian Relaxation for Natural Language Decoding
[episode notes]
- [33] Michael R. Douglas | G/H Conformal Field Theory
[episode notes]
- [32] Andre Martins | The Geometry of Constrained Structured Prediction
[episode notes]
- [31] Jay McClelland | Preliminary Letter Identification in the Perception of Words and Nonwords
[episode notes]
- [30] Dustin Tran | Probabilistic Programming for Deep Learning
[episode notes]
- [29] Tengyu Ma | Non-convex Optimization for Machine Learning: Design, Analysis, and Understanding
[episode notes]
- [28] Karen Ullrich | A Coding Perspective on Deep Latent Variable Models
[episode notes]
- [27] Danqi Chen | Neural Reading Comprehension and Beyond
[episode notes]
- [26] Kevin Ellis | Algorithms for Learning to Induce Programs
[episode notes]
- [25] Tomas Mikolov | Statistical Language Models Based on Neural Networks
[episode notes]
- [24] Martin Arjovsky | Out of Distribution Generalization in Machine Learning
[episode notes]
- [23] Simon Du | Gradient Descent for Non-convex Problems in Modern Machine Learning
[episode notes]
- [22] Graham Neubig | Unsupervised Learning of Lexical Information for Language Processing Systems
[episode notes]
- [21] Michela Paganini | Machine Learning Solutions for High Energy Physics
[episode notes]
- [20] Josef Urban | Exploring and Combining Deductive and Inductive Reasoning in Large Libraries of Formalized Mathematics
[episode notes]
- [19] Dumitru Erhan | Understanding Deep Architectures and the Effect of Unsupervised Pretraining
[episode notes]
- [18] Eero Simoncelli | Distributed Representation & Analysis of Visual Motion
[episode notes]
- [17] Paul Middlebrooks | Neuronal Correlates of Metacognition in Primate Frontal Cortex
[episode notes]
- [16] Aaron Courville | A Latent Cause Theory of Classical Conditioning
[episode notes]
- [15] Christian Szegedy | Some Applications of the Weighted Combinatorial Laplacian
[episode notes]
- [14] Been Kim | Interactive and Interpretable Machine Learning Models for Human Machine Collaboration
[episode notes]
- [13] Adji Bousso Dieng | Deep Probabilistic Graphical Modeling
[episode notes]
- [12] Martha White | Regularized Factor Models
[episode notes]
- [11] Jacob Andreas | Learning from Language
[episode notes]
- [10] Chelsea Finn | Learning to Learn with Gradients
[episode notes]
- [09] Kenneth Stanley | Efficient Evolution of Neural Networks through Complexification
[episode notes]
- [08] He He | Sequential Decisions and Predictions in Natural Language Processing
[episode notes]
- [07] John Schulman | Optimizing Expectations: From Deep RL to Stochastic Computation Graphs
[episode notes]
- [06] Yoon Kim | Deep Latent Variable Models of Natural Language
[episode notes]
- [05] Julian Togelius | Optimization, Imitation, and Innovation: Computational Intelligence and Games
[episode notes]
- [04] Sebastian Nowozin | Learning with Structured Data: Applications to Computer Vision
[episode notes]
- [03] Sebastian Ruder | Neural Transfer Learning for Natural Language Processing
[episode notes]
- [02] Colin Raffel | Learning-Based Methods for Comparing Sequences, with Applications to Audio-to-MIDI Alignment and Matching
[episode notes]
- [01] Gus Xia | Expressive Collaborative Music Performance via Machine Learning
[episode notes]
- [00] Introduction | The Thesis Review Podcast
[episode notes]