Drones have been used for various purposes, including military
applications, aerial photography, and pesticide spraying. However, drones are
vulnerable to external disturbances, and malfunctions in their propellers and
motors can easily occur. To improve the safety of drone operations, mechanical
faults should be detected in real time. This paper proposes a sound-based
deep neural network (DNN) fault classifier and a drone sound dataset. The dataset
was constructed by collecting the operating sounds of drones from microphones
mounted on three different drones in an anechoic chamber. The dataset includes
various operating conditions of drones, such as flight directions (front, back,
right, left, clockwise, counterclockwise) and faults on propellers and motors.
The drone sounds were then mixed with noises recorded in five different spots
on the university campus, with a signal-to-noise ratio (SNR) varying from 10 dB
to 15 dB. Using the acquired dataset, we train a DNN classifier, 1DCNN-ResNet,
that classifies the types of mechanical faults and their locations from
short-time input waveforms. We employ multitask learning (MTL), incorporating
direction classification as an auxiliary task so that the classifier
learns more general audio features. Tests on unseen data reveal that the
proposed multitask model successfully classifies drone faults and
outperforms single-task models even with less training data.
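The abstract does not detail how the recordings were mixed with campus noise; a minimal sketch of one standard mixing procedure, assuming the noise is scaled so the mixture hits a target SNR drawn from the stated 10–15 dB range (`mix_at_snr` is a hypothetical helper, not a function from the paper):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Mix a clean recording with noise at a target SNR in dB.

    Hypothetical helper: one standard way to scale the noise so that
    10*log10(P_signal / P_noise) equals snr_db after mixing.
    """
    # Loop the noise and crop it to the signal length.
    reps = int(np.ceil(len(signal) / len(noise)))
    noise = np.tile(noise, reps)[: len(signal)]
    # Solve for the scale factor from the power ratio.
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

# Illustrative example: a 1 s tone mixed with synthetic noise at an SNR
# drawn uniformly from [10, 15] dB, matching the dataset's stated range.
sig = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))
noi = np.random.default_rng(0).normal(size=8000)
snr = np.random.default_rng(1).uniform(10, 15)
mixed = mix_at_snr(sig, noi, snr)
```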
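The multitask setup described above is typically realized by summing the main-task loss with a weighted auxiliary-task loss. A minimal numpy sketch, assuming standard softmax cross-entropy for both heads; the auxiliary weight of 0.3 is an illustrative assumption, not a value reported in the paper:

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for a single example (hypothetical helper).
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def multitask_loss(fault_logits, fault_label, dir_logits, dir_label,
                   aux_weight=0.3):
    """Main fault-classification loss plus a weighted auxiliary
    direction-classification loss, as in the MTL setup described above.
    aux_weight=0.3 is an assumed value for illustration."""
    return (cross_entropy(fault_logits, fault_label)
            + aux_weight * cross_entropy(dir_logits, dir_label))
```

During training, gradients from both terms flow through the shared 1D-CNN trunk, which is what encourages the shared layers to learn audio features general enough to serve both tasks.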