

Search All 2023 Events

47 Results (Page 1 of 4)
Workshop
DIP-RL: Demonstration-Inferred Preference Learning in Minecraft
Ellen Novoseller · Vinicius G. Goecks · David Watkins · Josh Miller · Nicholas Waytowich
Workshop
HIP-RL: Hallucinated Inputs for Preference-based Reinforcement Learning in Continuous Domains
Chen Bo Calvin Zhang · Giorgia Ramponi
Workshop
Principal-Driven Reward Design and Agent Policy Alignment via Bilevel-RL
Souradip Chakraborty · Amrit Bedi · Alec Koppel · Furong Huang · Mengdi Wang
Poster
Thu 13:30 MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
Fei Ni · Jianye Hao · Yao Mu · Yifu Yuan · Yan Zheng · Bin Wang · Zhixuan Liang
Poster
Wed 14:00 Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings
Masatoshi Uehara · Ayush Sekhari · Jason Lee · Nathan Kallus · Wen Sun
Oral Session
Tue 20:30 Oral A1: Causal Learning, RL, Personalization
Poster
Tue 14:00 The Virtues of Laziness in Model-based RL: A Unified Objective and Algorithms
Anirudh Vemula · Yuda Song · Aarti Singh · J. Bagnell · Sanjiban Choudhury
Oral
Tue 20:54 Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL
Zakaria Mhammedi · Dylan Foster · Alexander Rakhlin
Poster
Tue 17:00 Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation
Asuman Ozdaglar · Sarath Pattathil · Jiawei Zhang · Kaiqing Zhang
Oral
Tue 21:26 Efficient RL via Disentangled Environment and Agent Representations
Kevin Gmelin · Shikhar Bahl · Russell Mendonca · Deepak Pathak
Poster
Wed 14:00 Q-learning Decision Transformer: Leveraging Dynamic Programming for Conditional Sequence Modelling in Offline RL
Taku Yamagata · Ahmed Khalil · Raul Santos-Rodriguez
Poster
Thu 16:30 A Connection between One-Step RL and Critic Regularization in Reinforcement Learning
Benjamin Eysenbach · Matthieu Geist · Sergey Levine · Ruslan Salakhutdinov