aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception
Autonomous driving is a popular research area within the computer vision
community. Since autonomous vehicles are highly safety-critical,
ensuring robustness is essential for real-world deployment. While several
public multimodal datasets are available, they mainly comprise two sensor
modalities (camera and LiDAR), which are not well suited to adverse weather. In
addition, they lack far-range annotations, making it harder to train the neural
networks that form the basis of a highway assistant function in an autonomous
vehicle. Therefore, we introduce a multimodal dataset for robust autonomous
driving with long-range perception. The dataset consists of 176 scenes with
synchronized and calibrated LiDAR, camera, and radar sensors covering a
360-degree field of view. The collected data were captured in highway, urban,
and suburban areas during the day, at night, and in rain, and are annotated with 3D
bounding boxes with consistent identifiers across frames. Furthermore, we
trained unimodal and multimodal baseline models for 3D object detection. Data
are available at \url{https://github.com/aimotive/aimotive_dataset}.
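As a quick illustration of the annotation layout described above (per-frame 3D bounding boxes whose identifiers persist across frames), the snippet below groups boxes by track identifier over one scene. It is a minimal sketch only: the directory layout, file names, and JSON keys (\texttt{boxes}, \texttt{track\_id}, \texttt{category}) are assumptions made for illustration, not the dataset's actual schema; see the repository linked above for the real format and loader.

\begin{verbatim}
import json
from collections import defaultdict
from pathlib import Path

# Hypothetical layout: one JSON annotation file per frame inside a scene folder.
# All field names below are illustrative assumptions, not the real schema.
SCENE_DIR = Path("aimotive_dataset/scenes/example_scene/annotations")

tracks = defaultdict(list)  # track_id -> list of (frame name, box dict)
for frame_file in sorted(SCENE_DIR.glob("*.json")):
    with frame_file.open() as f:
        frame = json.load(f)
    for box in frame["boxes"]:  # assumed key holding the frame's 3D boxes
        # Assumed per-box fields: "track_id", "category",
        # "center" (x, y, z), "size" (l, w, h), "yaw".
        tracks[box["track_id"]].append((frame_file.stem, box))

for track_id, observations in tracks.items():
    categories = {box["category"] for _, box in observations}
    print(f"track {track_id}: seen in {len(observations)} frames, "
          f"categories={categories}")
\end{verbatim}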