Object detection has recently achieved a breakthrough by removing the last non-differentiable component in the pipeline, Non-Maximum Suppression (NMS), and building a fully end-to-end system. However, what makes for its one-to-one prediction has not been well understood. In this paper, we first point out that one-to-one positive sample assignment is the key factor, while one-to-many assignment in previous detectors causes redundant predictions at inference. Second, we surprisingly find that even when trained with one-to-one assignment, previous detectors still produce redundant predictions. We identify the classification cost in the matching cost as the main ingredient: (1) previous detectors consider only location cost; (2) by additionally introducing classification cost, previous detectors immediately produce one-to-one predictions during inference. We introduce the concept of score gap to explore the effect of matching cost. Classification cost enlarges the score gap by choosing as positive samples those with the highest score in the training iteration and by reducing the noisy positive samples brought by location cost alone. Finally, we demonstrate the advantages of end-to-end object detection in crowded scenes.
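The one-to-one assignment described above can be sketched as a Hungarian matching over a cost matrix that sums a classification term and a location term. The following is a minimal illustration, not the paper's implementation: the function name, the L1 box distance as the location cost, and the unit cost weights are all assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_assign(pred_scores, pred_boxes, gt_labels, gt_boxes,
                      cls_weight=1.0, loc_weight=1.0):
    """One-to-one positive sample assignment via Hungarian matching.

    The matching cost combines a classification cost (negative predicted
    probability of the ground-truth class) with a location cost (here,
    pairwise L1 distance between predicted and ground-truth boxes).
    Each ground-truth box is matched to exactly one prediction.
    """
    # classification cost: shape [num_preds, num_gts]
    cls_cost = -pred_scores[:, gt_labels]
    # location cost: pairwise L1 box distance, shape [num_preds, num_gts]
    loc_cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    cost = cls_weight * cls_cost + loc_weight * loc_cost
    # Hungarian matching picks one prediction per ground-truth box
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return pred_idx, gt_idx

# toy example: 3 predictions, 2 ground-truth objects, 2 classes
pred_scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
pred_boxes = np.array([[0, 0, 1, 1], [2, 2, 3, 3], [0, 0, 2, 2]], dtype=float)
gt_labels = np.array([0, 1])
gt_boxes = np.array([[0, 0, 1, 1], [2, 2, 3, 3]], dtype=float)
pred_idx, gt_idx = one_to_one_assign(pred_scores, pred_boxes, gt_labels, gt_boxes)
```

Dropping the classification term (setting `cls_weight=0`) leaves only location cost, which per the abstract is what lets near-duplicate predictions survive; with the classification term included, the highest-scoring prediction for each object wins the match.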
Author Information
Peize Sun (The University of Hong Kong)
Yi Jiang (Bytedance)
Enze Xie (The University of Hong Kong)
Wenqi Shao (The Chinese University of Hong Kong)
Zehuan Yuan (ByteDance Inc.)
Changhu Wang (ByteDance AI Lab)
Ping Luo (The University of Hong Kong)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: What Makes for End-to-End Object Detection?
  Wed. Jul 21st 04:00 -- 06:00 AM Room
More from the Same Authors
- 2022 Poster: CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer
  Yao Mu · Shoufa Chen · Mingyu Ding · Jianyu Chen · Runjian Chen · Ping Luo
- 2022 Poster: VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix
  Teng Wang · Wenhao Jiang · Zhichao Lu · Feng Zheng · Ran Cheng · Chengguo Yin · Ping Luo
- 2022 Poster: Understanding The Robustness in Vision Transformers
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2022 Spotlight: Understanding The Robustness in Vision Transformers
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2022 Spotlight: VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix
  Teng Wang · Wenhao Jiang · Zhichao Lu · Feng Zheng · Ran Cheng · Chengguo Yin · Ping Luo
- 2022 Spotlight: CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer
  Yao Mu · Shoufa Chen · Mingyu Ding · Jianyu Chen · Runjian Chen · Ping Luo
- 2021 Poster: Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution
  Zhaoyang Zhang · Wenqi Shao · Jinwei Gu · Xiaogang Wang · Ping Luo
- 2021 Spotlight: Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution
  Zhaoyang Zhang · Wenqi Shao · Jinwei Gu · Xiaogang Wang · Ping Luo
- 2020 Poster: Channel Equilibrium Networks for Learning Deep Representation
  Wenqi Shao · Shitao Tang · Xingang Pan · Ping Tan · Xiaogang Wang · Ping Luo