TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers
LiDAR and camera are two important sensors for 3D object detection in
autonomous driving. Despite the increasing popularity of sensor fusion in this
field, the robustness against inferior image conditions, e.g., bad illumination
and sensor misalignment, remains under-explored. Existing fusion methods are
easily affected by such conditions, mainly because they rely on a hard
association between LiDAR points and image pixels established by calibration
matrices. We propose TransFusion,
a robust solution to LiDAR-camera fusion with a soft-association mechanism to
handle inferior image conditions. Specifically, our TransFusion consists of
convolutional backbones and a detection head based on a transformer decoder.
The first layer of the decoder predicts initial bounding boxes from a LiDAR
point cloud using a sparse set of object queries, and the second layer
adaptively fuses the object queries with useful image features, leveraging both
spatial and contextual relationships. The attention mechanism of the
transformer enables our model to adaptively determine where and what
information should be taken from the image, leading to a robust and effective
fusion strategy. We additionally design an image-guided query initialization
strategy to deal with objects that are difficult to detect in point clouds.
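Such an initialization could be realized, for example, by selecting peaks of an image-guided center heatmap on the BEV grid. The sketch below is a hypothetical illustration under that assumption (the helper name, the class-agnostic max, and the top-k selection are ours), not the paper's exact procedure.

```python
# Hypothetical image-guided query initialization from a BEV heatmap.
import torch

def init_queries_from_heatmap(heatmap, bev_feats, num_queries=200):
    """heatmap: (B, C, H, W) per-class center scores on the BEV grid.
    bev_feats: (B, d_model, H, W) BEV feature map.
    Returns query features and their (row, col) BEV positions."""
    B, C, H, W = heatmap.shape
    scores = heatmap.max(dim=1).values.flatten(1)      # (B, H*W), class-agnostic
    topk = scores.topk(num_queries, dim=1).indices     # (B, K) peak locations
    pos = torch.stack((topk // W, topk % W), dim=-1)   # (B, K, 2) grid coords
    flat = bev_feats.flatten(2).transpose(1, 2)        # (B, H*W, d_model)
    queries = flat.gather(1, topk.unsqueeze(-1).expand(-1, -1, flat.size(-1)))
    return queries, pos
```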
TransFusion achieves state-of-the-art performance on large-scale datasets. We
provide extensive experiments to demonstrate its robustness against degraded
image quality and calibration errors. We also extend the proposed method to the
3D tracking task, achieving first place on the nuScenes tracking leaderboard
and showing its effectiveness and generalization capability.