Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Generative Design of Decision Tree Policies for Reinforcement Learning
Jacob Pettit · Chak Shing Lee · Jiachen Yang · Alex Ho · Daniel Faissol · Brenden Petersen · Mikel Landajuela
Keywords: [ Discrete Optimization ] [ Hybrid Optimization ] [ Reinforcement Learning ] [ Decision Trees ] [ Generative Design ] [ Deep Symbolic Optimization ] [ Interpretable Machine Learning ]
Decision trees are an attractive choice for modeling policies in control environments due to their interpretability, conciseness, and ease of implementation. However, generating performant decision trees in this context presents several challenges, including the hybrid discrete-continuous nature of the search space, the variable-length nature of the trees, the existence of parent-dependent constraints, and the high computational cost of evaluating the objective function in reinforcement learning settings. Traditional methods, such as Mixed Integer Programming or Mixed Bayesian Optimization, are unsuitable for these problems due to the variable-length constrained search space and the high number of objective function evaluations required. To address these challenges, we propose to extend approaches from the field of neural combinatorial optimization to handle the hybrid discrete-continuous optimization problem of generating decision trees. Our approach demonstrates significant improvements in performance and sample efficiency over state-of-the-art methods for interpretable reinforcement learning with decision trees.
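To make the search problem concrete, the sketch below illustrates the ingredients the abstract names: variable-length trees sampled in prefix order, a parent-dependent constraint (a depth limit forcing leaves), discretized thresholds standing in for the continuous part of the space, and episode return as the expensive objective. This is a minimal illustration, not the authors' implementation: plain random sampling stands in for the learned autoregressive generator of neural combinatorial optimization, and the one-dimensional control task, constants, and function names are all invented for the example.

```python
import random

MAX_DEPTH = 2
ACTIONS = [0, 1]               # discrete actions emitted at leaves
FEATURES = [0]                 # feature indices usable in internal nodes
THRESHOLDS = [-0.5, 0.0, 0.5]  # discretized thresholds (the "continuous" part)

def sample_tree(depth=0):
    """Sample a variable-length decision tree in prefix order.
    Parent-dependent constraint: at MAX_DEPTH only leaves are allowed."""
    if depth == MAX_DEPTH or random.random() < 0.5:
        return ("leaf", random.choice(ACTIONS))
    feat = random.choice(FEATURES)
    thr = random.choice(THRESHOLDS)
    return ("node", feat, thr, sample_tree(depth + 1), sample_tree(depth + 1))

def act(tree, state):
    """Execute the tree policy on a state vector."""
    if tree[0] == "leaf":
        return tree[1]
    _, feat, thr, left, right = tree
    return act(left if state[feat] <= thr else right, state)

def episode_return(tree, n_steps=50, seed=0):
    """Toy control task (invented): reward 1 when the action
    matches the sign of a uniformly drawn 1-D state."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_steps):
        s = [rng.uniform(-1, 1)]
        total += 1 if (act(tree, s) == 1) == (s[0] > 0) else 0
    return total / n_steps

def search(n_samples=200, seed=1):
    """Sample-and-screen loop: each candidate costs a full rollout,
    which is why sample efficiency matters in the RL setting."""
    random.seed(seed)
    best, best_r = None, -1.0
    for _ in range(n_samples):
        t = sample_tree()
        r = episode_return(t)
        if r > best_r:
            best, best_r = t, r
    return best, best_r
```

In the actual line of work the abstract builds on, the uniform `random.choice` calls would be replaced by a learned sequence model whose token distributions condition on the partial tree, trained with a policy gradient on episode return; the tree representation and evaluation loop stay essentially the same.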