Raymond Chua
Hey there and a warm welcome!
Last updated 25 Feb 2026.
I recently defended my PhD in Computer Science at McGill University, conducted at Mila – Quebec Artificial Intelligence Institute, under the supervision of Doina Precup and Blake Richards. My research bridges reinforcement learning and computational neuroscience, investigating how principles such as predictive representations and memory consolidation can inform the design of adaptive, continually learning AI systems.
My doctoral work addressed the plasticity–stability dilemma in reinforcement learning through structured predictive representations and multi-timescale learning mechanisms, demonstrating how biologically inspired approaches can reduce interference while preserving adaptability.
Building on this foundation, I am increasingly interested in mechanistic interpretability — using tools inspired by neuroscience, such as representation similarity analysis and cross-attention probing, to understand how predictive representations (e.g., successor features) encode and transform information over time. I am particularly motivated by questions at the intersection of continual learning, foundation models, and embodied decision-making systems.
Beyond research, I’m passionate about bridging academia and industry: I mentor students from McGill, UdeM, and Mila as they tackle real-world challenges with companies seeking to integrate machine learning into their products and pipelines. In my free time, I enjoy pushing both my intellectual and physical limits through triathlon, which continues to teach me about endurance, balance, and growth.
news
| Date | News |
|---|---|
| Feb 20, 2026 | Excited to share that I have successfully defended my PhD thesis! Thank you to my examiners, Prof. Mark Crowley (University of Waterloo), Prof. Ross Otto (McGill), and my advisors Blake and Doina! |
| Jan 17, 2026 | Excited to kick off the new year by sharing that our work “Do Successor Features Resemble Hippocampal Place Cells?” has been accepted at Cosyne 2026! See you all in Portugal! 🇵🇹🧠🚀 |
| Nov 24, 2025 | Had the pleasure of giving an invited talk at Graphcore in London, where I presented my work on brain-inspired continual reinforcement learning. Great conversations with the team about bridging neuroscience and ML, and exciting to see the strong research culture there. |
| Nov 20, 2025 | Visited Prof. Claudia Clopath’s lab at Imperial College London, where I gave a talk on our work titled “Brain-Inspired Continual Reinforcement Learning Agents.” I have met Claudia numerous times throughout my PhD journey, from Cosyne to giving a talk at our Biological and Artificial RL workshop at NeurIPS in 2020. It was a great honor to spend time with her lab and learn about what they are working on as well! |
| Nov 13, 2025 | Visited Prof. Rui Ponte Costa’s lab at the University of Oxford, where I gave a talk on our work titled “Brain-Inspired Continual Reinforcement Learning Agents.” Rui was my master’s thesis supervisor, and it was wonderful to reconnect with him and his lab again! |
selected publications
- **NeurIPS** — Deep RL Needs Deep Behavior Analysis: Exploring Implicit Planning by Model-Free Agents in Open-Ended Environments. Proceedings of the 39th Conference on Neural Information Processing Systems (NeurIPS), 2025. This is my first collaboration with members of Prof. Kanaka Rajan’s lab. Riley and Ryan are first authors, and Prof. Kanaka Rajan is the corresponding author.