I am an assistant professor of computer science in the Courant Institute at New York University. I am recruiting PhD students—if you are interested in working on parallel programming languages, compilers and run-time systems, and/or parallel algorithms, please reach out!
My research focuses on provably efficient implementations of high-level programming languages, especially for parallel programming. The goal is to make it simpler and safer to develop parallel software by providing strong guarantees of not only safety but also performance. Two examples from my work include (1) provably efficient parallel garbage collection based on disentanglement, and (2) provably efficient automatic granularity control.
Previously, I was a post-doc at Carnegie Mellon University, working with Umut Acar. I completed my PhD at Carnegie Mellon in 2022.
Check out my blog, and come find me elsewhere on the web: bsky, mastodon, github, twitter.
News
- (Dec 12, 2024) I will be giving a tutorial at POPL'25! Come join us on Sun, Jan 19 for a hands-on dive into MPL: Provably Efficient Parallel Programming. Two sessions; three hours total. A short description of the tutorial is available here.
- (Sep 16, 2024) GraFeyn: Efficient Parallel Sparse Simulation of Quantum Circuits is a Best Paper at QCE'24!
- (Jan 1, 2024) Automatic Parallelism Management is a Distinguished Paper at POPL'24!
- (Nov 17, 2023) We have two papers accepted at POPL'24! See paper list below.
- (June 21, 2023) I am the 2023 recipient of the SIGPLAN Reynolds Doctoral Dissertation Award.
MaPLe (MPL)
I lead development of MaPLe (MPL for short), an efficient and scalable parallel functional programming language. MPL offers excellent multicore performance through a combination of provably efficient parallel garbage collection and automatic granularity control. Across a wide range of parallel algorithms and benchmarks (see our benchmark suite), we have shown that MPL can compete with languages such as C/C++ in terms of time and space efficiency.
As a functional language, MPL makes it easy to avoid data races and unintended race conditions. MPL is currently being used at Carnegie Mellon University to help teach parallel programming to over 500 students each year.
MPL is open-source! Check out the project on GitHub or try it out now with Docker:
$ docker pull shwestrick/mpl
$ docker run -it shwestrick/mpl /bin/bash
...# examples/bin/primes @mpl procs 4 --
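For a taste of the language, here is a minimal sketch of fork-join parallelism in MPL. It uses `ForkJoin.par`, MPL's primitive for evaluating two functions in parallel and returning their results as a pair (the naive Fibonacci recursion here is purely illustrative):

```sml
(* Naive parallel Fibonacci. ForkJoin.par evaluates the two thunks
 * in parallel and returns their results as a pair. MPL's automatic
 * granularity control decides when to actually run them in parallel. *)
fun fib n =
  if n < 2 then n
  else
    let
      val (a, b) = ForkJoin.par (fn _ => fib (n - 1), fn _ => fib (n - 2))
    in
      a + b
    end

val _ = print (Int.toString (fib 20) ^ "\n")
```

Because the two branches are pure, they cannot race with each other; the compiler and run-time system take care of scheduling and memory management.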
Ph.D. Thesis
(John C. Reynolds Doctoral Dissertation Award)
Efficient and Scalable Parallel Functional Programming Through Disentanglement
Carnegie Mellon University, August 2022.
[pdf]
[abstract]
[award info]
Publications
(Best Paper Award)
GraFeyn: Efficient Parallel Sparse Simulation of Quantum Circuits
QCE 2024.
[pdf]
[abstract]
(Distinguished Paper)
Automatic Parallelism Management
POPL 2024.
[pdf]
[abstract]
[dl.acm]
DisLog: A Separation Logic for Disentanglement
POPL 2024.
[pdf]
[abstract]
[dl.acm]
Efficient Parallel Functional Programming with Effects
PLDI 2023.
[pdf]
[abstract]
[dl.acm]
WARDen: Specializing Cache Coherence for High-Level Parallel Languages
CGO 2023.
[abstract]
[dl.acm]
(Distinguished Paper)
Entanglement Detection With Near-Zero Cost
ICFP 2022.
[pdf]
[abstract]
[dl.acm]
Parallel Block-Delayed Sequences
PPoPP 2022.
[pdf]
[abstract]
[dl.acm]
(Distinguished Paper)
Provably Space-Efficient Parallel Functional Programming
POPL 2021.
[pdf]
[abstract]
[dl.acm]
Parallel Batch-Dynamic Trees via Change Propagation
ESA 2020.
[pdf]
[abstract]
[drops.dagstuhl]
Disentanglement in Nested-Parallel Programs
POPL 2020.
[pdf]
[abstract]
[dl.acm]
Fairness in Responsive Parallelism
ICFP 2019.
[pdf]
[abstract]
[dl.acm]
Hierarchical Memory Management for Mutable State
PPoPP 2018.
[pdf]
[abstract]
[dl.acm]
[arxiv]
Brief Announcement: Parallel Dynamic Tree Contraction via Self-Adjusting Computation
SPAA 2017.
[abstract]
[dl.acm]
Preprints
DePa: Simple, Provably Efficient, and Practical Order Maintenance for Task Parallelism
2022.
[abstract]
[arxiv]
Talks
- GraFeyn: Efficient Parallel Sparse Simulation of Quantum Circuits
  @ QCE, Montreal, September 2024
  [slides]
- Automatic Parallelism Management
  @ POPL, London, January 2024
  [video] [slides]
- How to Thrive as a PhD Student
  @ PLMW, Seattle, September 2023
  @ PLMW, Ljubljana, September 2022
  [slides]
- (Keynote)
  Efficient and Scalable Parallel Functional Programming Through Disentanglement
  @ ML Workshop, September 2022
  [video] [abstract] [slides]
- Entanglement Detection With Near-Zero Cost
  @ ICFP, September 2022
  [video] [abstract] [slides]
- Parallel Block-Delayed Sequences
  @ PPoPP, April 2022
  [video] [abstract] [slides]
- Efficient and Scalable Parallel Functional Programming Through Disentanglement
  @ Stanford, March 2022
  @ Cornell, April 2022
  [abstract] [slides]
- Disentanglement: Provably Efficient Parallel Functional Programming
  @ MIT Fast Code Seminar, March 2021
  [video] [abstract] [slides]
- Disentanglement in Nested-Parallel Programs
  @ POPL, January 2020
  [video] [abstract] [slides]
- Efficient Parallel Functional Programming with Hierarchical Memory Management
  @ RIT, June 2019
  [abstract] [slides]
- Brief Announcement: Parallel Dynamic Tree Contraction via Self-Adjusting Computation
  @ SPAA, July 2017
  [abstract] [slides]
Teaching
At NYU, in Spring 2025, I will be teaching a new course.
I was a teaching assistant for the following courses at Carnegie Mellon:
- 15-210: Parallel and Sequential Data Structures and Algorithms
  (Semesters S20, S19, F18, S16, F15, S15, F14, S14, F13)
- 15-122: Principles of Imperative Computation
  (Summer Sessions N14, M14, N13, M13)
Mentoring and Advising
I am a SIGPLAN-M mentor. I have also been a mentor and advisor for both undergraduate and graduate-level research:
- 2024-present: Karan Kumar Gangadhar, master's independent study
- 2019-2020: Lawrence Wang, undergraduate research
- 2018-2019: Rohan Yadav, undergraduate thesis (now PhD candidate at Stanford)
- 2018-2020: Yue Yao, master's thesis (now PhD student at CMU)
- 2018: Yifan Qiao, CMU summer research intern (now PhD student at UCLA)
Professional Service
- ICFP 2024-present: publicity chair
- PLDI 2025: program committee member
- SPAA 2025: PL area chair
- ASPLOS 2025: program committee member
- POPL SRC 2025: selection committee member
- PADL 2025: program committee member
- TQC: reviewer (2024)
- TOPC: reviewer (2024)
- ML Family Workshop (@ICFP) 2024: program committee member
- SPAA 2024: invited sub-reviewer
- PLDI 2024: artifact evaluation committee member
- PLMW (@POPL) 2024: panelist
- FHPNC Workshop (@ICFP) 2023: co-chair
- SC 2023: invited sub-reviewer
- PLDI 2023: artifact evaluation committee member
- PLDI 2023: invited sub-reviewer
Collaborators
I'm fortunate to have worked with all sorts of incredible people, including Umut Acar, Guy Blelloch, Matthew Fluet, Stefan Muller, Stephanie Balzer, Daniel Anderson, Laxman Dhulipala, Mike Rainey, Rohan Yadav, Jatin Arora, Ram Raghunathan, Adrien Guatto, Yue Yao, Lawrence Wang, Yue Niu, Peter Dinda, Nikos Hardavellas, Simone Campanoni, Mike Wilkins, Brian Suchy, Enrico Deiana, Pascal Costanza, Alexandre Moine, Yongshan Ding, Dantong Li, Sanil Rao, Troels Henriksen, Colin McDonald, Byeongjee Kang, Mingkuan Xu, Pengyu Liu, Joseph Tassarotti, Anirudh Sivaraman, Ulysses Butler, and many others.
Other
I've been playing music my whole life, and nearly ended up becoming a professional tuba player. (Here's me in 2013, not long before I switched my undergraduate major to computer science.)
Nowadays, I like to keep up with music by learning covers (e.g. Dean Town by Vulfpeck), composing (e.g. this and this), and doing other silly stuff.
I wrote the music for Maracaibo Digital, the online version of the Maracaibo board game. Some snippets: 1, 2.
I wrote some of the code (AI for bot opponents, and random board generation) for Hexicon, a mobile word-game.