About Me

I am a final-year Ph.D. student in Computer Science at Brown University, where I work in the Intelligent Robot Lab advised by George Konidaris. Previously, I earned a Bachelor’s degree in Electronic Engineering from Universidad Simon Bolivar, Caracas, Venezuela, and a Master’s degree in Computer Science from Politecnico di Milano, where I was fortunate to work with Marcello Restelli and Nicola Gatti at the AIRLAB. During Summer 2021, I interned at Amazon Alexa in the Dialogue Research group with Maryam Fazel-Zarandi, working on large language models (LLMs) for semantic parsing in task-oriented dialogue systems via supervised fine-tuning (SFT) and reinforcement learning (RL).

Contact: rrs at brown dot edu · Google Scholar · LinkedIn · GitHub

I am currently on the job market for Research Scientist positions.

Research Interests

During my Ph.D., my research has focused on representation learning and reinforcement learning (RL), with an emphasis on principled approaches to learning state representations, including state abstraction [absreps] and structure discovery [factoredreps], directly from high-dimensional observations. I have leveraged advances in generative modeling, contrastive learning, and energy-based modeling to build practical algorithms for learning latent state representations.
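To make one ingredient of this line of work concrete, below is a minimal sketch of a contrastive (InfoNCE-style) objective that encourages an encoder to map temporally adjacent observations to nearby latent states. It is an illustrative example, not the training setup from the papers above; the network sizes, temperature, and random data are placeholders.

```python
# Minimal sketch (not the papers' actual method): an encoder maps
# high-dimensional observations to a latent state, and an InfoNCE-style loss
# pulls together representations of consecutive observations while pushing
# apart the rest of the batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps flattened observations to a low-dimensional latent state."""
    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def infonce_loss(z_t: torch.Tensor, z_tp1: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE: each z_t should be most similar to its own successor z_tp1."""
    z_t = F.normalize(z_t, dim=-1)
    z_tp1 = F.normalize(z_tp1, dim=-1)
    logits = z_t @ z_tp1.T / temperature      # (B, B) similarity matrix
    labels = torch.arange(z_t.shape[0])       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage on random "transitions" (obs_t, obs_tp1); real data would come
# from an agent's replay buffer.
obs_dim, batch = 64, 128
enc = Encoder(obs_dim)
opt = torch.optim.Adam(enc.parameters(), lr=3e-4)
obs_t, obs_tp1 = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
loss = infonce_loss(enc(obs_t), enc(obs_tp1))
opt.zero_grad(); loss.backward(); opt.step()
```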

My work also explores the intersection of natural language and RL [rlang], investigating how to communicate prior knowledge to RL agents through language. This effort led to the development of RLang, a formal language for RL that allows the communication of partial, task-specific knowledge to agents—enabling them to learn more efficiently than in tabula rasa settings. The RLang framework has inspired further research in natural language understanding and symbol grounding from an RL perspective [nl2rlang].
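As a toy illustration of the underlying idea (this is not RLang syntax or the RLang package API; the partial_policy function, the AdvisedQLearner class, and the state labels are hypothetical), the sketch below shows a tabular agent that follows advice wherever a partial policy is defined and falls back to ordinary epsilon-greedy Q-learning everywhere else.

```python
# Hypothetical sketch of partial-knowledge advice, not RLang itself: advice is
# a "partial policy" defined only on some states; a tabular Q-learning agent
# follows it where it applies and explores on its own elsewhere.
import random
from collections import defaultdict

ACTIONS = ["left", "right", "up", "down"]

def partial_policy(state):
    """Task-specific advice: defined only for a subset of states."""
    if state == "at_door":          # hypothetical state label
        return "up"                 # e.g., "go through the door"
    return None                     # no advice: the agent must learn here

class AdvisedQLearner:
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.99):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        advised = partial_policy(state)
        if advised is not None:                 # follow advice where it exists
            return advised
        if random.random() < self.epsilon:      # otherwise explore ...
            return random.choice(ACTIONS)
        return max(self.q[state], key=self.q[state].get)  # ... or exploit

    def update(self, s, a, r, s_next):
        # Standard Q-learning update; advice only shapes action selection.
        target = r + self.gamma * max(self.q[s_next].values())
        self.q[s][a] += self.alpha * (target - self.q[s][a])
```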

During my internship at Amazon Alexa, I gained hands-on experience applying LLMs to semantic parsing in task-oriented dialogue systems, adapting models with both supervised fine-tuning (SFT) and RL.
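For context on the SFT side, here is a minimal, hypothetical sketch of supervised fine-tuning for semantic parsing: a seq2seq model is trained with token-level cross-entropy under teacher forcing to map an utterance to its semantic parse. The model, vocabulary, and data below are placeholders and do not reflect the internal Amazon setup.

```python
# Hypothetical SFT sketch: a tiny encoder-decoder stands in for a pretrained
# model; sizes, vocabulary, and data are placeholders.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64
model = nn.Transformer(d_model=d_model, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)
embed = nn.Embedding(vocab_size, d_model)
lm_head = nn.Linear(d_model, vocab_size)
params = list(model.parameters()) + list(embed.parameters()) + list(lm_head.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

# Toy batch: utterance token ids -> target parse token ids (e.g. intent/slot forms).
utterance = torch.randint(0, vocab_size, (8, 16))   # (batch, source length)
parse = torch.randint(0, vocab_size, (8, 12))       # (batch, target length)

# Teacher forcing: the decoder sees the parse shifted right and predicts the
# next parse token at every position, masked so it cannot look ahead.
decoder_in, targets = parse[:, :-1], parse[:, 1:]
causal_mask = nn.Transformer.generate_square_subsequent_mask(decoder_in.size(1))
hidden = model(embed(utterance), embed(decoder_in), tgt_mask=causal_mask)
loss = nn.functional.cross_entropy(
    lm_head(hidden).reshape(-1, vocab_size), targets.reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
```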

For the community

Hopefully, this implementation can be useful for model-based RL (MBRL) research and make the DreamerV3 algorithm easier to adapt.

Publications

Preprints

[factoredreps] R. Rodriguez-Sanchez, C. Allen, G. Konidaris. From Pixels to Factors: Learning Independently Controllable State Variables for Reinforcement Learning. Under review, 2025. [abstract] [paper]

Conferences

[skillgraphs] A. Bagaria, A. De Mello Koch, R. Rodriguez-Sanchez, S. Lobel, G. Konidaris. Intrinsically Motivated Discovery of Temporally Abstract Graph-based Models of the World. 2nd Reinforcement Learning Conference (RLC), Edmonton, Alberta, 2025. [abstract] [paper]

[absreps] R. Rodriguez-Sanchez, G. Konidaris. Learning Abstract World Models for Value-preserving Planning with Options. 1st Reinforcement Learning Conference (RLC), Amherst, MA, 2024. [abstract] [paper] [code]

[rlang] R. Rodriguez-Sanchez*, B. Spiegel*, J. Wang, R. Patel, S. Tellex, G. Konidaris. RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents. International Conference on Machine Learning (ICML). Honolulu, Hawaii, 2023. [abstract] [paper] [RLang.ai] [RLang package]

[vitransfer] R. Rodriguez-Sanchez*, A. Tirinzoni*, M. Restelli. Transfer of Value Functions via Variational Methods. Advances in Neural Information Processing Systems (NeurIPS), Montreal, Canada, 2018. [abstract] [paper] [poster] [code]

Workshops


Page design based on https://ankitsultana.com