CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity
Roadside camera-driven 3D object detection is a crucial task in intelligent
transportation systems: it extends the perception range beyond the limitations
of vision-centric vehicles and enhances road safety. Whereas previous studies
are limited to using only depth or only height information, we find that both
depth and height matter and are in fact complementary. The depth feature
encodes precise geometric cues, whereas the height feature chiefly
distinguishes between categories of height intervals, essentially providing
semantic context. This insight motivates Complementary-BEV (CoBEV), a novel
end-to-end monocular 3D object detection framework that integrates depth and
height to construct robust BEV representations. In essence, CoBEV estimates
each pixel's depth and height distributions and lifts the camera features into
3D space for lateral fusion via the newly proposed two-stage complementary
feature selection (CFS) module. A BEV feature distillation framework is also
seamlessly integrated to further improve detection accuracy using prior
knowledge from a fusion-modal CoBEV teacher. We conduct extensive experiments
on the public roadside camera benchmarks DAIR-V2X-I and Rope3D, as well as on
the private Supremind-Road dataset, demonstrating that CoBEV not only achieves
new state-of-the-art accuracy but also significantly surpasses previous
methods in robustness under challenging long-distance scenarios and noisy
camera disturbances, and generalizes far better in heterologous settings with
drastic changes in scene and camera parameters. For the first time, the
vehicle AP score of a camera-based model reaches 80% on DAIR-V2X-I in easy
mode. The source code will be made publicly available at
https://github.com/MasterHow/CoBEV.
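To make the depth-height complementarity concrete, the following is a minimal
PyTorch sketch of the lifting-and-fusion step described above: per-pixel depth
and height distributions are predicted, image features are lifted along each
distribution, and the two resulting feature maps are fused with a learned
gate. The gate is a simplified stand-in for the paper's two-stage CFS module,
and all module names, bin counts, and tensor shapes here are illustrative
assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DepthHeightLift(nn.Module):
    """Sketch: lift image features via depth and height distributions,
    then fuse the two views with a learned gate (a stand-in for CFS)."""

    def __init__(self, in_ch=64, n_depth=48, n_height=10):
        super().__init__()
        self.depth_head = nn.Conv2d(in_ch, n_depth, 1)    # depth-bin logits
        self.height_head = nn.Conv2d(in_ch, n_height, 1)  # height-bin logits
        self.gate = nn.Sequential(                        # complementary gate
            nn.Conv2d(2 * in_ch, in_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat):
        # feat: (B, C, H, W) image features from the backbone
        d = self.depth_head(feat).softmax(dim=1)   # (B, D, H, W)
        h = self.height_head(feat).softmax(dim=1)  # (B, Z, H, W)
        # Outer product lifts features along each distribution
        # (Lift-Splat style); real code would then splat onto a BEV grid.
        vol_d = d.unsqueeze(1) * feat.unsqueeze(2)  # (B, C, D, H, W)
        vol_h = h.unsqueeze(1) * feat.unsqueeze(2)  # (B, C, Z, H, W)
        # Stand-in "BEV" features: collapse the bin axes for brevity.
        bev_d = vol_d.mean(dim=2)                   # (B, C, H, W)
        bev_h = vol_h.mean(dim=2)
        g = self.gate(torch.cat([bev_d, bev_h], dim=1))  # (B, 1, H, W)
        # Depth branch dominates where g -> 1, height branch where g -> 0.
        return g * bev_d + (1 - g) * bev_h


# Usage example with dummy features:
net = DepthHeightLift()
bev = net(torch.randn(2, 64, 32, 88))
print(bev.shape)  # torch.Size([2, 64, 32, 88])
```

In the full model the two lifted volumes would be splatted onto a common BEV
grid before fusion; the per-location gate is what realizes the "complementary
selection" idea, letting geometric (depth) and semantic (height) evidence
dominate where each is more reliable.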
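The BEV feature distillation mentioned above can likewise be sketched as a
standard two-term objective: imitate the frozen fusion-modal teacher's BEV
features and match its softened predictions. This is a generic distillation
recipe under assumed names (`bev_distill_loss`, temperature `T`, weights
`alpha`/`beta`), not necessarily CoBEV's exact losses.

```python
import torch
import torch.nn.functional as F


def bev_distill_loss(student_bev, teacher_bev,
                     student_logits, teacher_logits,
                     T=2.0, alpha=1.0, beta=1.0):
    """Sketch of BEV distillation from a frozen fusion-modal teacher:
    a feature-imitation term plus a soft-logit KL term."""
    # Match the student's BEV features to the (detached) teacher features.
    feat_loss = F.mse_loss(student_bev, teacher_bev.detach())
    # Match softened class predictions (Hinton-style knowledge distillation).
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * feat_loss + beta * kd_loss


# Usage example with dummy tensors:
s_bev, t_bev = torch.randn(2, 64, 32, 88), torch.randn(2, 64, 32, 88)
s_log, t_log = torch.randn(2, 10), torch.randn(2, 10)
print(bev_distill_loss(s_bev, t_bev, s_log, t_log).item())
```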