DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning
Zero-shot learning (ZSL) aims to predict unseen classes whose samples have
never appeared during training. One of the most effective and widely used
forms of semantic information for zero-shot image classification is attributes,
i.e., annotations of class-level visual characteristics. However, current
methods often fail to discriminate subtle visual distinctions between images,
owing not only to the shortage of fine-grained annotations but also to
attribute imbalance and co-occurrence. In this paper, we present a
transformer-based end-to-end ZSL method named DUET, which integrates latent
semantic knowledge from pre-trained language models (PLMs) via a
self-supervised multi-modal learning paradigm. Specifically, we (1) develop a
cross-modal semantic grounding network to investigate the model's capability
of disentangling semantic attributes from images; (2) apply an attribute-level
contrastive learning strategy to further enhance the model's discrimination of
fine-grained visual characteristics under attribute co-occurrence and
imbalance; and (3) propose a multi-task learning policy that jointly considers
these model objectives. DUET achieves state-of-the-art performance on three
standard ZSL benchmarks and on a knowledge-graph-equipped ZSL benchmark; its
components are effective and its predictions are interpretable.
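
To make the attribute-level contrastive objective in (2) and the multi-task
policy in (3) more concrete, below is a minimal sketch in PyTorch. The function
names, tensor shapes, prototype-based formulation, and inverse-frequency
re-weighting are illustrative assumptions for exposition, not DUET's actual
implementation.

    import torch
    import torch.nn.functional as F

    def attribute_contrastive_loss(img_attr_emb, attr_prototypes, attr_labels,
                                   temperature=0.07, attr_weights=None):
        """InfoNCE-style contrastive loss at the attribute level (sketch).

        img_attr_emb:    (B, D) image features projected into the attribute space
        attr_prototypes: (A, D) one learnable embedding per attribute
        attr_labels:     (B,)   index of the grounded attribute for each image
        attr_weights:    (A,)   optional per-attribute weights (e.g. inverse
                                frequency) to counter attribute imbalance
        """
        img = F.normalize(img_attr_emb, dim=-1)
        proto = F.normalize(attr_prototypes, dim=-1)
        logits = img @ proto.t() / temperature  # (B, A) similarity to every attribute
        # Cross-entropy over all attribute prototypes pulls the image toward its
        # grounded attribute and pushes it away from the others.
        return F.cross_entropy(logits, attr_labels, weight=attr_weights)

    def multi_task_loss(losses, task_weights):
        """Weighted sum of per-task losses (e.g. classification, grounding, contrastive)."""
        return sum(w * l for w, l in zip(task_weights, losses))

    if __name__ == "__main__":
        B, A, D = 8, 85, 256                    # batch size, attributes, embedding dim
        img_attr_emb = torch.randn(B, D)
        attr_prototypes = torch.randn(A, D, requires_grad=True)
        attr_labels = torch.randint(0, A, (B,))
        attr_weights = torch.rand(A) + 0.5      # stand-in for inverse-frequency weights

        l_con = attribute_contrastive_loss(img_attr_emb, attr_prototypes,
                                           attr_labels, attr_weights=attr_weights)
        l_total = multi_task_loss([l_con, torch.tensor(0.3)], task_weights=[1.0, 0.5])
        print(l_con.item(), l_total.item())

The optional per-attribute weight is one simple way to counter attribute
imbalance, and contrasting each image against all attribute prototypes (rather
than only frequently co-occurring ones) is one way to separate attributes that
tend to appear together.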