Knowledge Distillation from A Stronger Teacher
Unlike existing knowledge distillation methods that focus on baseline
settings, where the teacher models and training strategies are not as strong
and competitive as state-of-the-art approaches, this paper presents a method
dubbed DIST to distill knowledge better from a stronger teacher. We empirically
find that the discrepancy between the predictions of the student and a stronger
teacher tends to be considerably more severe. As a result, exactly matching the
predictions with KL divergence would disturb the training and make existing
methods perform poorly.
In this paper, we show that simply preserving the relations between the
predictions of the teacher and the student suffices, and we propose a
correlation-based loss that explicitly captures the intrinsic inter-class
relations from the teacher. Moreover, considering that different instances have
different semantic similarities to each class, we also extend this relational
match to the intra-class level.
Our method is simple yet practical, and extensive experiments demonstrate that
it adapts well to various architectures, model sizes, and training strategies,
and consistently achieves state-of-the-art performance on image classification,
object detection, and semantic segmentation tasks. Code is available at:
https://github.com/hunto/DIST_KD .