

Poster
in
Workshop: Reinforcement Learning for Real Life

Designing Interpretable Approximations to Deep Reinforcement Learning

Nathan Dahlin · Rahul Jain · Pierluigi Nuzzo · Krishna Kalagarla · Nikhil Naik


Abstract:

In an ever-expanding set of research and application areas, deep neural networks (DNNs) set the bar for algorithm performance. However, depending on additional constraints such as processing power and execution time limits, or requirements such as verifiable safety guarantees, it may not be feasible to actually use such high-performing DNNs in practice. Many techniques have been developed in recent years to compress or distill complex DNNs into smaller, faster, or more understandable models and controllers. This work seeks to identify reduced models that not only preserve a desired performance level, but also, for example, succinctly explain the latent knowledge represented by a DNN. We illustrate the effectiveness of the proposed approach through the evaluation of decision tree variants and kernel machines in the context of benchmark reinforcement learning tasks.
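To make the idea of distilling a DNN policy into an interpretable surrogate concrete, the sketch below shows one generic recipe: roll out a trained teacher policy to collect state-action pairs, fit a shallow decision tree by behavioral cloning, and compare returns. This is not the authors' exact procedure; `env` (a Gym-style environment with discrete actions) and `dnn_policy` (a trained teacher mapping observations to actions) are assumed placeholders.

```python
# Minimal policy-distillation sketch (assumptions: Gymnasium-style env API,
# discrete actions, and a pre-trained teacher `dnn_policy`; not the paper's
# specific method).
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def collect_teacher_data(env, dnn_policy, n_episodes=50):
    """Roll out the teacher DNN policy and record (observation, action) pairs."""
    observations, actions = [], []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            act = dnn_policy(obs)
            observations.append(obs)
            actions.append(act)
            obs, reward, terminated, truncated, _ = env.step(act)
            done = terminated or truncated
    return np.array(observations), np.array(actions)


def distill_to_tree(observations, actions, max_depth=4):
    """Fit a small decision tree that imitates the teacher's action choices."""
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(observations, actions)
    return tree


def evaluate_policy(env, policy_fn, n_episodes=20):
    """Average episodic return of a policy given as obs -> action."""
    returns = []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy_fn(obs))
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns))


# Example usage (with placeholder env / dnn_policy):
# X, y = collect_teacher_data(env, dnn_policy)
# tree = distill_to_tree(X, y, max_depth=4)
# tree_return = evaluate_policy(env, lambda o: tree.predict(o.reshape(1, -1))[0])
```

A depth-limited tree fit this way can be read directly as a set of if-then rules over observation features, which is one way a reduced model can "explain" the teacher's behavior while its return on the benchmark task quantifies how much performance the distillation preserves.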
