WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmarks for Autonomous Driving on Water Surfaces
Autonomous driving on water surfaces plays an essential role in executing
hazardous and time-consuming missions, such as maritime surveillance, survivor
rescue, environmental monitoring, hydrographic mapping, and waste cleaning. This
work presents WaterScenes, the first multi-task 4D radar-camera fusion dataset
for autonomous driving on water surfaces. Equipped with a 4D radar and a
monocular camera, our Unmanned Surface Vehicle (USV) provides all-weather
perception of object-related information, including color, shape, texture,
range, velocity, azimuth, and elevation. Focusing on typical static and
dynamic objects on water surfaces, we label the camera images and radar point
clouds at the pixel level and point level, respectively. In addition to basic
perception tasks, such as object detection, instance segmentation and semantic
segmentation, we also provide annotations for free-space segmentation and
waterline segmentation. Leveraging the multi-task and multi-modal data, we
conduct benchmark experiments on the radar-only and camera-only modalities, as
well as on the fused modalities. Experimental results demonstrate that 4D radar-camera
fusion can considerably improve the accuracy and robustness of perception on
water surfaces, especially in adverse lighting and weather conditions.
The WaterScenes dataset is publicly available at https://waterscenes.github.io.
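
To make the radar modality concrete, the sketch below shows how a 4D radar
return (range, azimuth, elevation, Doppler velocity) can be converted to a
Cartesian point and projected onto the camera image. This is a minimal
illustration under assumed conventions, not the dataset's official toolkit:
the intrinsics `K` and extrinsics `T` are hypothetical placeholders, and the
actual sensor frames and calibration are defined by the dataset itself.

```python
import numpy as np

def radar_to_cartesian(rng, azimuth, elevation):
    # Spherical-to-Cartesian under an assumed convention:
    # x forward, y left, z up; angles in radians.
    x = rng * np.cos(elevation) * np.cos(azimuth)
    y = rng * np.cos(elevation) * np.sin(azimuth)
    z = rng * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

def project_to_image(xyz, K, T_cam_radar):
    # Pinhole projection of radar points into the image plane.
    # K (3x3 intrinsics) and T_cam_radar (4x4 radar-to-camera
    # extrinsics) are placeholders, not the dataset's calibration.
    homo = np.hstack([xyz, np.ones((len(xyz), 1))])
    cam = (T_cam_radar @ homo.T).T[:, :3]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

# Hypothetical returns: range (m), azimuth (rad), elevation (rad),
# Doppler velocity (m/s).
points = np.array([
    [12.3,  0.10, -0.02,  1.4],
    [45.0, -0.25,  0.01, -0.8],
])
xyz = radar_to_cartesian(points[:, 0], points[:, 1], points[:, 2])
doppler = points[:, 3]  # radial velocity is kept as a per-point feature

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])   # placeholder intrinsics
# Placeholder extrinsics: rotate the radar frame (x fwd, y left, z up)
# into the camera frame (x right, y down, z fwd), zero translation.
T = np.array([[0.0, -1.0,  0.0, 0.0],
              [0.0,  0.0, -1.0, 0.0],
              [1.0,  0.0,  0.0, 0.0],
              [0.0,  0.0,  0.0, 1.0]])
print(project_to_image(xyz, K, T))  # pixel coordinates (u, v)
```

Projecting the point cloud this way pairs each radar return with image
pixels, which is the usual starting point for radar-camera fusion; the
Doppler channel then travels alongside the projected points as an extra
feature.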