

Poster

End-to-end Active Object Tracking via Reinforcement Learning

Wenhan Luo · Peng Sun · Fangwei Zhong · Wei Liu · Tong Zhang · Yizhou Wang

Hall B #64

Abstract:

We study active object tracking, where a tracker takes the visual observation (i.e., a frame sequence) as input and produces the camera control signal (e.g., move forward, turn left, etc.) as output. Conventional methods tackle tracking and camera control separately, which makes joint tuning difficult and requires substantial human effort for labeling as well as expensive trial-and-error in the real world. To address these issues, we propose an end-to-end solution via deep reinforcement learning, in which a ConvNet-LSTM function approximator is adopted for direct frame-to-action prediction. We further propose an environment augmentation technique and a customized reward function, both of which are crucial for successful training. The tracker trained in simulators (ViZDoom, Unreal Engine) generalizes well to unseen object moving paths, unseen object appearances, unseen backgrounds, and distracting objects. It can also recover tracking after occasionally losing the target. Through experiments on the VOT dataset, we find that the tracking ability, obtained solely from simulators, can potentially transfer to real-world scenarios.
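The abstract describes a ConvNet-LSTM function approximator that maps each observed frame directly to a camera-control action. Below is a minimal, hypothetical sketch of such a policy in PyTorch; the layer sizes, input resolution, action count, and the actor-critic heads are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMTracker(nn.Module):
    """Sketch of a ConvNet-LSTM frame-to-action policy (assumed details)."""

    def __init__(self, num_actions: int = 6, hidden_size: int = 256):
        super().__init__()
        # Convolutional encoder: raw RGB frame -> flat feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Assumes 84x84 input frames -> 32 * 9 * 9 encoder features
        self.fc = nn.Sequential(nn.Linear(32 * 9 * 9, hidden_size), nn.ReLU())
        # LSTM cell aggregates temporal context across the frame sequence
        self.lstm = nn.LSTMCell(hidden_size, hidden_size)
        # Actor head: logits over camera-control actions (move forward, turn left, ...)
        self.policy = nn.Linear(hidden_size, num_actions)
        # Critic head: state-value estimate for an actor-critic RL algorithm
        self.value = nn.Linear(hidden_size, 1)

    def forward(self, frame, state):
        x = self.fc(self.encoder(frame))   # (batch, hidden_size)
        h, c = self.lstm(x, state)         # recurrent update over time
        return self.policy(h), self.value(h), (h, c)

# Usage: step through frames one at a time, sampling one action per frame.
if __name__ == "__main__":
    net = ConvLSTMTracker()
    h = torch.zeros(1, 256)
    c = torch.zeros(1, 256)
    frame = torch.rand(1, 3, 84, 84)       # one observed frame
    logits, value, (h, c) = net(frame, (h, c))
    action = torch.distributions.Categorical(logits=logits).sample()
```

In training, the sampled action would drive the simulated camera, and the resulting reward (shaped by the paper's customized reward function, not reproduced here) would update both heads.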
