Oral
Decoupling Gradient-Like Learning Rules from Representations
Philip Thomas · Christoph Dann · Emma Brunskill

Thu Jul 12th 05:30 -- 05:40 PM @ A1

In machine learning, learning often corresponds to changing the parameters of a parameterized function. A learning rule is an algorithm or mathematical expression that specifies precisely how the parameters should be changed. When creating a machine learning system, we must make two decisions: what representation should be used (i.e., what parameterized function should be used) and what learning rule should be used to search through the resulting set of representable functions. Using most learning rules, these two decisions are coupled in a subtle (and often unintentional) way. That is, using the same learning rule with two different representations that can represent the same sets of functions can result in two different outcomes. After arguing that this coupling is undesirable, particularly when using neural networks, we present a method for partially decoupling these two decisions for a broad class of gradient-like learning rules that span unsupervised learning, reinforcement learning, and supervised learning.
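The coupling described above can be seen in a minimal toy example (not from the paper; the specific parameterizations and learning rate below are illustrative assumptions). Two parameterizations that represent exactly the same set of linear functions, f(x) = w·x with parameter w versus f(x) = (2v)·x with parameter v, produce different learned functions under plain gradient descent:

```python
# A single training point and a fixed learning rate (arbitrary choices).
x, y = 1.0, 1.0
lr = 0.1

# Parameterization A: f(x) = w * x, loss L(w) = 0.5 * (w*x - y)**2.
w = 0.0
for _ in range(10):
    grad_w = (w * x - y) * x      # dL/dw
    w -= lr * grad_w

# Parameterization B: f(x) = (2*v) * x, i.e. the effective weight is 2*v.
# Same function class, but the gradient with respect to v is rescaled.
v = 0.0
for _ in range(10):
    grad_v = (2 * v * x - y) * 2 * x   # dL/dv, chain rule through w = 2*v
    v -= lr * grad_v

# After the same number of steps, the two learned functions differ:
# the effective weights w and 2*v are not equal.
print(w, 2 * v)
```

Both runs would converge to the same function in the limit, but after any finite number of steps the learned functions differ, which is the sense in which the learning rule is coupled to the representation.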

Author Information

Philip Thomas (University of Massachusetts Amherst)
Christoph Dann (Carnegie Mellon University)
Emma Brunskill (Stanford University)
