Many structured prediction problems (particularly in vision and language domains) are ambiguous, with multiple outputs being 'correct' for a given input: there are many ways to describe an image or to translate a sentence. However, exhaustively annotating the applicability of all possible outputs is intractable because output spaces are exponentially large (e.g., all English sentences). In practice, these problems are cast as multi-class prediction, maximizing the likelihood of only a sparse set of annotations and unfortunately penalizing the model for placing belief on plausible but unannotated outputs. We make and test the following hypothesis: for a given input, the annotations of its neighbors may serve as an additional supervisory signal. Specifically, we propose an objective that transfers supervision from neighboring examples. We first study the properties of our method in a controlled toy setup, then report results on multi-label classification and two image-grounded sequence modeling tasks, image captioning and question generation. We evaluate using standard task-specific metrics and measures of output diversity, finding consistent improvements over standard maximum likelihood training and other baselines.
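The paper's objective is not reproduced on this page. As a rough illustration only, the sketch below shows one simple way neighbor transfer could look in a multi-class setting: the usual cross-entropy on an example's own annotation plus a weighted term rewarding probability mass on its neighbors' annotations. The function name `neighbor_transfer_loss`, the precomputed `neighbor_labels`, and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def neighbor_transfer_loss(logits, labels, neighbor_labels, alpha=0.5):
    """Sketch of a neighbor-supervision objective (illustrative, not the paper's).

    logits:          (B, C) model scores for B examples over C classes
    labels:          (B,)   each example's own sparse annotation
    neighbor_labels: (B, K) annotations borrowed from each example's K neighbors
    alpha:           weight on the neighbor-supervision term (assumed hyperparameter)
    """
    # Standard maximum-likelihood term on the example's own annotation.
    own_loss = F.cross_entropy(logits, labels)

    # Neighbor term: also reward probability mass on the neighbors' annotations.
    log_probs = F.log_softmax(logits, dim=-1)             # (B, C)
    nbr_log_probs = log_probs.gather(1, neighbor_labels)  # (B, K)
    neighbor_loss = -nbr_log_probs.mean()

    return own_loss + alpha * neighbor_loss

# Toy usage with random tensors in place of a real dataset and neighbor index.
logits = torch.randn(4, 10)                     # 4 examples, 10 classes
labels = torch.randint(0, 10, (4,))             # each example's own annotation
neighbor_labels = torch.randint(0, 10, (4, 3))  # 3 neighbor annotations per example
loss = neighbor_transfer_loss(logits, labels, neighbor_labels, alpha=0.5)
```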
Author Information
Ashwin Kalyan (Georgia Tech)
Stefan Lee (Georgia Institute of Technology)
Anitha Kannan (Curai)
Dhruv Batra (Georgia Institute of Technology / Facebook AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations
  Thu. Jul 12th 04:15 -- 07:00 PM, Room Hall B #60
More from the Same Authors
- 2020: Bridging Worlds in Reinforcement Learning with Model-Advantage
  Ashwin Kalyan · Nirbhay Modhe
- 2023 Poster: Adaptive Coordination in Social Embodied Rearrangement
  Andrew Szot · Unnat Jain · Zsolt Kira · Dhruv Batra · Ruta Desai · Akshara Rai
- 2019 Poster: Probabilistic Neural Symbolic Models for Interpretable Visual Question Answering
  Shanmukha Ramakrishna Vedantam · Karan Desai · Stefan Lee · Marcus Rohrbach · Dhruv Batra · Devi Parikh
- 2019 Poster: TarMAC: Targeted Multi-Agent Communication
  Abhishek Das · Theophile Gervet · Joshua Romoff · Dhruv Batra · Devi Parikh · Michael Rabbat · Joelle Pineau
- 2019 Poster: Trainable Decoding of Sets of Sequences for Neural Sequence Models
  Ashwin Kalyan · Peter Anderson · Stefan Lee · Dhruv Batra
- 2019 Oral: TarMAC: Targeted Multi-Agent Communication
  Abhishek Das · Theophile Gervet · Joshua Romoff · Dhruv Batra · Devi Parikh · Michael Rabbat · Joelle Pineau
- 2019 Oral: Probabilistic Neural Symbolic Models for Interpretable Visual Question Answering
  Shanmukha Ramakrishna Vedantam · Karan Desai · Stefan Lee · Marcus Rohrbach · Dhruv Batra · Devi Parikh
- 2019 Oral: Trainable Decoding of Sets of Sequences for Neural Sequence Models
  Ashwin Kalyan · Peter Anderson · Stefan Lee · Dhruv Batra
- 2019 Poster: Counterfactual Visual Explanations
  Yash Goyal · Ziyan Wu · Jan Ernst · Dhruv Batra · Devi Parikh · Stefan Lee
- 2019 Oral: Counterfactual Visual Explanations
  Yash Goyal · Ziyan Wu · Jan Ernst · Dhruv Batra · Devi Parikh · Stefan Lee