Poster
in
Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning
Towards A Unified Neural Architecture for Visual Recognition and Reasoning
Calvin Luo · Boqing Gong · Ting Chen · Chen Sun
Recognition and reasoning are two pillars of visual understanding. However, these tasks have received imbalanced attention: recent advances in neural networks have delivered strong empirical performance on visual recognition, while comparatively little progress has been made on visual reasoning. Intuitively, unifying the two tasks under a single framework is desirable, as they are mutually dependent and mutually beneficial. Motivated by the recent success of multi-task transformers for visual recognition and language understanding, we propose a unified neural architecture for visual recognition and reasoning tasks with a generic interface (e.g., tokens) for all tasks. Our framework enables a principled investigation of how different visual recognition tasks, datasets, and inductive biases can help enable spatiotemporal reasoning capabilities. Notably, we find that object detection, which requires the spatial localization of individual objects, is the recognition task most beneficial for reasoning. We further demonstrate via probing that implicit object-centric representations emerge automatically inside our framework. We also discover that visual reasoning and object detection respond to drastically different model components; certain architectural choices, such as the backbone of the visual encoder, have a significant impact on visual reasoning but little on object detection. Given these results, we believe a fruitful direction forward is to treat visual reasoning as a first-class citizen alongside visual recognition, as the two are strongly correlated but benefit from potentially different design choices.
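To make the idea of a generic token interface concrete, here is a minimal, purely illustrative sketch: outputs from different tasks (e.g., detection boxes and reasoning answers) are serialized into sequences over one shared integer vocabulary, so a single sequence decoder can handle all tasks. All names, offsets, and the quantization scheme below are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical shared token vocabulary: coordinate bins first, then
# class-label tokens, then reasoning-answer tokens. All constants are
# illustrative assumptions, not taken from the paper.
NUM_BINS = 1000                 # coordinate quantization bins (assumed)
COORD_OFFSET = 0                # tokens [0, NUM_BINS) encode coordinates
CLASS_OFFSET = NUM_BINS         # class labels follow the coordinate bins
ANSWER_OFFSET = NUM_BINS + 100  # reasoning answers after 100 class slots

def quantize(x: float, num_bins: int = NUM_BINS) -> int:
    """Map a normalized coordinate in [0, 1] to a discrete bin index."""
    return min(int(x * num_bins), num_bins - 1)

def detection_to_tokens(box, class_id):
    """Serialize one detection as [ymin, xmin, ymax, xmax, class] tokens."""
    return [COORD_OFFSET + quantize(c) for c in box] + [CLASS_OFFSET + class_id]

def answer_to_tokens(answer_id):
    """Serialize a reasoning answer (e.g., a multiple-choice index)."""
    return [ANSWER_OFFSET + answer_id]

# Both tasks now emit sequences over the same integer vocabulary.
det_seq = detection_to_tokens([0.1, 0.2, 0.5, 0.8], class_id=3)  # -> [100, 200, 500, 800, 1003]
ans_seq = answer_to_tokens(1)                                    # -> [1101]
```

Because every task reads and writes the same token space, recognition and reasoning can share a single model and training loop, which is the kind of unification the abstract describes.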