PODS: Policy Optimization via Differentiable Simulation

Abstract:
Current reinforcement learning (RL) methods use simulation models as simple black-box oracles. In this paper, with the goal of improving the performance exhibited by RL algorithms, we explore a systematic way of leveraging the additional information provided by an emerging class of differentiable simulators. Building on concepts established by Deterministic Policy Gradients (DPG) methods, the neural network policies learned with our approach represent deterministic actions. In a departure from standard methodologies, however, learning these policies does not hinge on approximations of the value function that must be learned concurrently in an actor-critic fashion. Instead, we exploit differentiable simulators to directly compute the analytic gradient of a policy's value function with respect to the actions it outputs. This, in turn, allows us to efficiently perform locally optimal policy improvement iterations. Compared against other state-of-the-art RL methods, we show that with minimal hyper-parameter tuning our approach consistently leads to better asymptotic behavior across a set of payload manipulation tasks that demand a high degree of accuracy and precision.
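To make the core idea concrete, the sketch below shows how a differentiable simulator exposes the analytic gradient of a trajectory's return with respect to the actions, which is exactly the quantity that actor-critic methods must otherwise estimate with a learned critic. This is a minimal illustration in JAX, not the paper's implementation: the toy point-mass dynamics, the function names (`step`, `rollout_return`), and all constants are assumptions made for the example.

```python
# Minimal sketch, assuming a differentiable simulator written in JAX with a
# step function step(state, action) -> (next_state, reward). The dynamics,
# names, and hyper-parameters below are illustrative, not the authors' API.
import jax
import jax.numpy as jnp

def step(state, action):
    """Toy differentiable simulator: a 1-D point mass pushed toward a goal."""
    pos, vel = state
    vel = vel + 0.1 * action          # apply the action as a force
    pos = pos + 0.1 * vel
    reward = -(pos - 1.0) ** 2        # penalize distance to the goal at 1.0
    return (pos, vel), reward

def rollout_return(actions, init_state):
    """Cumulative reward of an action sequence; differentiable in `actions`
    because the simulator itself is differentiable."""
    def body(state, a):
        next_state, r = step(state, a)
        return next_state, r
    _, rewards = jax.lax.scan(body, init_state, actions)
    return jnp.sum(rewards)

# Analytic gradient of the value with respect to the actions -- the quantity
# a black-box RL method would have to approximate with a learned critic.
init_state = (jnp.array(0.0), jnp.array(0.0))
actions = jnp.zeros(20)
value_grad = jax.grad(rollout_return)(actions, init_state)

# One locally optimal improvement step: nudge the actions along the gradient.
improved_actions = actions + 1e-1 * value_grad
```

In the full method, improved actions obtained this way supervise a regression update of the deterministic policy network, so each iteration performs a locally optimal policy improvement without ever fitting a value function.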
Author Information
Miguel Angel Zamora Mora (ETH Zurich)
Momchil Peychev (ETH Zurich)
Sehoon Ha (Georgia Institute of Technology)
Martin Vechev (ETH Zurich)
Stelian Coros (ETH Zurich)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: PODS: Policy Optimization via Differentiable Simulation
  Tue. Jul 20th, 02:25 -- 02:30 PM
More from the Same Authors
- 2021: Automated Discovery of Adaptive Attacks on Adversarial Defenses
  Chengyuan Yao · Pavol Bielik · Petar Tsankov · Martin Vechev
- 2022 Workshop: Workshop on Formal Verification of Machine Learning
  Huan Zhang · Leslie Rice · Kaidi Xu · Aditi Raghunathan · Wan-Yi Lin · Cho-Jui Hsieh · Clark Barrett · Martin Vechev · Zico Kolter
- 2022 Poster: On Distribution Shift in Learning-based Bug Detectors
  Jingxuan He · Luca Beurer-Kellner · Martin Vechev
- 2022 Spotlight: On Distribution Shift in Learning-based Bug Detectors
  Jingxuan He · Luca Beurer-Kellner · Martin Vechev
- 2021 Poster: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
  Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev
- 2021 Poster: Scalable Certified Segmentation via Randomized Smoothing
  Marc Fischer · Maximilian Baader · Martin Vechev
- 2021 Spotlight: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
  Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev
- 2021 Spotlight: Scalable Certified Segmentation via Randomized Smoothing
  Marc Fischer · Maximilian Baader · Martin Vechev
- 2020 Poster: Adversarial Robustness for Code
  Pavol Bielik · Martin Vechev
- 2020 Poster: Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
  Raphaël Dang-Nhu · Gagandeep Singh · Pavol Bielik · Martin Vechev
- 2019 Poster: DL2: Training and Querying Neural Networks with Logic
  Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev
- 2019 Oral: DL2: Training and Querying Neural Networks with Logic
  Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev
- 2018 Poster: Training Neural Machines with Trace-Based Supervision
  Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev
- 2018 Oral: Training Neural Machines with Trace-Based Supervision
  Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev
- 2018 Poster: Differentiable Abstract Interpretation for Provably Robust Neural Networks
  Matthew Mirman · Timon Gehr · Martin Vechev
- 2018 Oral: Differentiable Abstract Interpretation for Provably Robust Neural Networks
  Matthew Mirman · Timon Gehr · Martin Vechev