About Me

I recently defended my PhD in Computer Science at Brown University (April 2026), where I worked in the Intelligent Robot Lab advised by George Konidaris. My dissertation, Action-driven Learning of Structured Representations for Sequential Decision Making, develops the thesis that acting agents can recover causal structure of the world that passive observation cannot. Previously, I earned a Bachelor's degree in Electronic Engineering from Universidad Simon Bolivar in Caracas, Venezuela, and a Master's degree in Computer Science from Politecnico di Milano, where I was fortunate to work with Marcello Restelli and Nicola Gatti at AIRLAB. In Summer 2021, I interned with the Dialogue Research group at Amazon Alexa, working with Maryam Fazel-Zarandi on applying large language models (LLMs) to semantic parsing in task-oriented dialogue systems via supervised fine-tuning (SFT) and reinforcement learning (RL).

Contact: rrs at brown dot edu · Google Scholar · LinkedIn · GitHub

I am currently on the job market for Research Scientist and Postdoc positions, available starting July 2026, and open to opportunities in the US and abroad. The best way to reach me is by email: rrs at brown dot edu.

Research Interests

My PhD research focused on representation learning and reinforcement learning (RL), with an emphasis on principled approaches to state representation learning, including state abstraction [absreps] and structure discovery [factoredreps] directly from high-dimensional observations. I have leveraged advances in generative modeling, contrastive learning, and energy-based modeling to design practical algorithms for learning latent state representations.

My work also explores the intersection of natural language and RL [rlang], investigating how to communicate prior knowledge to RL agents through language. This effort led to the development of RLang, a formal language for RL that allows the communication of partial, task-specific knowledge to agents—enabling them to learn more efficiently than in tabula rasa settings. The RLang framework has inspired further research in natural language understanding and symbol grounding from an RL perspective [nl2rlang].

During my internship at Amazon Alexa, I gained hands-on experience applying large language models (LLMs) for semantic parsing in task-oriented dialogue systems, fine-tuning models via supervised learning (SFT) and reinforcement learning (RL).

Publications

Preprints

[factoredreps] R. Rodriguez-Sanchez, C. Allen, G. Konidaris. From Pixels to Factors: Learning Independently Controllable State Variables for Reinforcement Learning. Accepted to the 3rd Reinforcement Learning Conference (RLC), 2026. [abstract] [paper]

Conferences

[skillgraphs] A. Bagaria, A. De Mello Koch, R. Rodriguez-Sanchez, S. Lobel, G. Konidaris. Intrinsically Motivated Discovery of Temporally Abstract Graph-based Models of the World. 2nd Reinforcement Learning Conference (RLC), Edmonton, Alberta, 2025. [abstract] [paper]

[absreps] R. Rodriguez-Sanchez, G. Konidaris. Learning Abstract World Models for Value-preserving Planning with Options. 1st Reinforcement Learning Conference (RLC), Amherst, MA, 2024. [abstract] [paper] [code]

[rlang] R. Rodriguez-Sanchez*, B. Spiegel*, J. Wang, R. Patel, S. Tellex, G. Konidaris. RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents. International Conference on Machine Learning (ICML). Honolulu, Hawaii, 2023. [abstract] [paper] [RLang.ai] [RLang package]

[vitransfer] R. Rodriguez-Sanchez*, A. Tirinzoni*, M. Restelli. Transfer of Value Functions via Variational Methods. Advances in Neural Information Processing Systems (NeurIPS), Montreal, Canada, 2018. [abstract] [paper] [poster] [code]

Page design based on https://ankitsultana.com