I am a PhD candidate and researcher at Mila - Quebec AI Institute, where I am advised by Marc G. Bellemare and Doina Precup. My core research interests lie at the intersection of the empirical science of deep learning and sequential decision-making.
At present, I’m especially excited about developing RL-inspired methods for large language models to tackle problems of diverse generation and adaptive computation.
My work is supported by a generous scholarship from the Fonds de recherche du Québec.
Before graduate school, I worked as a software engineer at Google in San Francisco. Prior to that, I spent four idyllic years at Brown University, where I studied math and computer science, among other things. There, I was fortunate to get my start in research by working with David Abel and Michael Littman.
Controlling Large Language Model Agents with Entropic Activation Steering
Nate Rahn, Pierluca D’Oro, Marc G. Bellemare
ICML 2024 Workshop on Mechanistic Interpretability
arXiv
Policy Optimization in a Noisy Neighborhood: On Return Landscapes in Continuous Control
Nate Rahn*, Pierluca D’Oro*, Harley Wiltzer, Pierre-Luc Bacon, Marc G. Bellemare
NeurIPS 2023
arXiv
Value Preserving State-Action Abstractions
David Abel, Nate U. Rahn, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael Littman
AISTATS 2020
paper
Outside of research, I enjoy reading on a variety of topics, taking long human-powered journeys and pursuing other physical practices, and writing songs for guitar and voice.