Vision Transformer Adapter for Dense Predictions
This work investigates a simple yet powerful dense prediction task adapter
for Vision Transformer (ViT). Unlike recent variants that incorporate
vision-specific inductive biases into their architectures, the plain ViT
suffers from inferior performance on dense prediction tasks due to weak prior
assumptions. To address this issue, we propose the ViT-Adapter, which allows
the plain ViT to achieve performance comparable to vision-specific transformers.
Specifically, the backbone in our framework is a plain ViT that can learn
powerful representations from large-scale multi-modal data. When transferring
to downstream tasks, a pre-training-free adapter is used to introduce
image-related inductive biases into the model, making it suitable for these
tasks. We verify ViT-Adapter on multiple dense prediction tasks, including
object detection, instance segmentation, and semantic segmentation. Notably,
without using extra detection data, our ViT-Adapter-L yields state-of-the-art
60.9 box AP and 53.0 mask AP on COCO test-dev. We hope that the ViT-Adapter
could serve as an alternative to vision-specific transformers and facilitate
future research. The code and models will be released at
https://github.com/czczup/ViT-Adapter.
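
To illustrate the general idea described above, the following is a minimal, hypothetical sketch: a plain ViT backbone is left untouched, and a small convolutional adapter injects image-related spatial priors into its patch tokens at transfer time. This is not the authors' implementation; the module names (PlainViT, SpatialPriorAdapter) and the simple additive injection are assumptions made for illustration only.

```python
# Hypothetical sketch: a plain ViT plus a convolutional adapter that supplies
# local spatial priors. Not the official ViT-Adapter code.
import torch
import torch.nn as nn


class PlainViT(nn.Module):
    """Stand-in for a plain ViT that returns patch tokens of shape (B, N, C)."""
    def __init__(self, patch_size=16, dim=384, depth=4, heads=6):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x, extra_tokens=None):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, C)
        if extra_tokens is not None:
            # Inject adapter features into the patch tokens (simplifying assumption).
            tokens = tokens + extra_tokens
        return self.blocks(tokens)


class SpatialPriorAdapter(nn.Module):
    """Pre-training-free adapter: convolutions provide image-related inductive biases."""
    def __init__(self, dim=384, patch_size=16, img_size=224):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim // 4, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(dim // 4, dim, 3, stride=2, padding=1), nn.GELU(),
        )
        # Align the convolutional feature map to the ViT patch grid.
        self.pool = nn.AdaptiveAvgPool2d(img_size // patch_size)

    def forward(self, x):
        feat = self.pool(self.conv(x))           # (B, C, H/16, W/16)
        return feat.flatten(2).transpose(1, 2)   # (B, N, C), matches patch tokens


vit = PlainViT()
adapter = SpatialPriorAdapter()
img = torch.randn(2, 3, 224, 224)
tokens = vit(img, extra_tokens=adapter(img))     # adapted patch tokens
print(tokens.shape)                              # torch.Size([2, 196, 384])
```

In this sketch the backbone itself is unchanged, so it could reuse weights pre-trained on large-scale multi-modal data, while only the adapter is trained from scratch for the downstream dense prediction task.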