End-to-End Referring Video Object Segmentation with Multimodal Transformers
The referring video object segmentation task (RVOS) involves segmentation of
a text-referred object instance in the frames of a given video. Due to the
complex nature of this multimodal task, which combines text reasoning, video
understanding, instance segmentation and tracking, existing approaches
typically rely on sophisticated pipelines in order to tackle it. In this paper,
we propose a simple Transformer-based approach to RVOS. Our framework, termed
Multimodal Tracking Transformer (MTTR), models the RVOS task as a sequence
prediction problem. Following recent advancements in computer vision and
natural language processing, MTTR is based on the realization that video and
text can be processed together effectively and elegantly by a single multimodal
Transformer model. MTTR is end-to-end trainable, free of text-related inductive
bias components and requires no additional mask-refinement post-processing
steps. As such, it simplifies the RVOS pipeline considerably compared to
existing methods. Evaluation on standard benchmarks reveals that MTTR
significantly outperforms the previous state of the art across multiple metrics. In particular,
MTTR shows impressive +5.7 and +5.0 mAP gains on the A2D-Sentences and
JHMDB-Sentences datasets respectively, while processing 76 frames per second.
In addition, we report strong results on the public validation set of
Refer-YouTube-VOS, a more challenging RVOS dataset that has yet to receive the
attention of researchers. The code to reproduce our experiments is available at
https://github.com/mttr2021/MTTR
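
To make the central idea concrete, the sketch below illustrates how video and text tokens can be fused into a single sequence and decoded by one Transformer with per-instance queries, in the spirit of the abstract's "single multimodal Transformer" claim. This is a minimal, hypothetical simplification for illustration only: all module names, feature dimensions, the shared query set, and the coarse mask head are assumptions of this sketch, not the actual MTTR architecture or code from the linked repository.

```python
# Illustrative sketch only: joint video-text processing with a single Transformer.
# Dimensions, module names, and the mask head are hypothetical simplifications.
import torch
import torch.nn as nn

class MinimalMultimodalTransformer(nn.Module):
    def __init__(self, d_model=256, num_queries=8):
        super().__init__()
        self.video_proj = nn.Linear(2048, d_model)    # project per-frame visual features
        self.text_proj = nn.Linear(768, d_model)      # project text-encoder token features
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        # One set of instance queries decoded against the joint sequence (simplification).
        self.queries = nn.Embedding(num_queries, d_model)
        self.mask_head = nn.Linear(d_model, 32 * 32)  # coarse 32x32 mask logits per query

    def forward(self, video_feats, text_feats):
        # video_feats: (B, T*H*W, 2048) flattened spatio-temporal features
        # text_feats:  (B, L, 768) token features from a pretrained text encoder
        memory = torch.cat([self.video_proj(video_feats),
                            self.text_proj(text_feats)], dim=1)  # one multimodal sequence
        batch = memory.size(0)
        tgt = self.queries.weight.unsqueeze(0).expand(batch, -1, -1)
        hs = self.transformer(memory, tgt)            # queries attend to video + text jointly
        return self.mask_head(hs).view(batch, -1, 32, 32)

# Toy usage: 2 clips, 4x14x14 visual tokens, 10 text tokens.
model = MinimalMultimodalTransformer()
masks = model(torch.randn(2, 4 * 14 * 14, 2048), torch.randn(2, 10, 768))
print(masks.shape)  # torch.Size([2, 8, 32, 32])
```

The point of the sketch is the design choice stated in the abstract: no text-specific inductive-bias modules and no mask-refinement post-processing, just a shared Transformer over the concatenated video and text tokens producing per-query mask predictions.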