BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping
Methods for extracting audio and speech features have been studied since
pioneering work on spectrum analysis decades ago. Recent efforts are guided by
the ambition to develop general-purpose audio representations. For example,
deep neural networks can extract optimal embeddings if they are trained on
large audio datasets. This work extends existing methods based on
self-supervised learning by bootstrapping, proposes various encoder
architectures, and explores the effects of using different pre-training
datasets.
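Bootstrapping here refers to the BYOL family of self-supervised methods: an online network learns to predict the projection produced by a slowly updated target network, with no negative pairs. As a rough illustration of that scheme, not the paper's exact setup, a minimal PyTorch sketch follows; all names (BYOLAudio, MLPHead, tau) and the placeholder encoder are our assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPHead(nn.Module):
    """Projector/predictor head used on both branches."""
    def __init__(self, in_dim, hidden_dim=256, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class BYOLAudio(nn.Module):
    """Online network learns by predicting a slow-moving target network."""
    def __init__(self, encoder, emb_dim, tau=0.99):
        super().__init__()
        self.online_encoder = encoder
        self.online_projector = MLPHead(emb_dim)
        self.predictor = MLPHead(128)
        # Target branch: an exponential moving average of the online branch.
        self.target_encoder = copy.deepcopy(encoder)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        for p in self.target_projector.parameters():
            p.requires_grad = False
        self.tau = tau

    @torch.no_grad()
    def update_target(self):
        """EMA update: target <- tau * target + (1 - tau) * online."""
        pairs = zip(
            list(self.online_encoder.parameters()) + list(self.online_projector.parameters()),
            list(self.target_encoder.parameters()) + list(self.target_projector.parameters()),
        )
        for online_p, target_p in pairs:
            target_p.mul_(self.tau).add_(online_p, alpha=1 - self.tau)

    def loss(self, view1, view2):
        """Cosine loss between online prediction and stop-gradient target projection."""
        pred = self.predictor(self.online_projector(self.online_encoder(view1)))
        with torch.no_grad():
            target = self.target_projector(self.target_encoder(view2))
        pred, target = F.normalize(pred, dim=-1), F.normalize(target, dim=-1)
        return (2 - 2 * (pred * target).sum(dim=-1)).mean()

# Two augmented views of the same log-mel excerpt; the encoder is a placeholder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 96, 512))
model = BYOLAudio(encoder, emb_dim=512)
x1, x2 = torch.randn(8, 64, 96), torch.randn(8, 64, 96)
loss = model.loss(x1, x2) + model.loss(x2, x1)   # symmetrized over the two views
loss.backward()
model.update_target()
```

The EMA coefficient and head sizes above are illustrative defaults, not values reported by the paper.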
Lastly, we present a novel training framework that yields a hybrid audio representation, combining handcrafted and data-driven learned audio features.
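The abstract does not specify how the two feature families are fused inside the training framework; the simplest realization of such a hybrid representation is to concatenate classic handcrafted descriptors with the learned embedding. The sketch below shows only that baseline idea; the librosa descriptor choice, the function names, and the dummy learned_encoder are all our assumptions.

```python
import numpy as np
import librosa

def handcrafted_features(wav, sr=16000):
    """Classic descriptors: MFCC statistics plus spectral summaries."""
    mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=20)
    centroid = librosa.feature.spectral_centroid(y=wav, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(wav)
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),        # 40 dims
        centroid.mean(axis=1), zcr.mean(axis=1),    # 2 dims
    ]).astype(np.float32)

def hybrid_embedding(wav, learned_encoder, sr=16000):
    """Concatenate handcrafted descriptors with the learned embedding."""
    learned = learned_encoder(wav)  # placeholder for the pretrained model's output
    return np.concatenate([handcrafted_features(wav, sr), learned])

# Example with a dummy "learned" encoder returning a fixed-size vector.
wav = np.random.randn(16000).astype(np.float32)
emb = hybrid_embedding(wav, lambda w: np.zeros(512, dtype=np.float32))
print(emb.shape)  # (554,)
```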
All the proposed representations were evaluated in the HEAR NeurIPS 2021 challenge on auditory scene classification and timestamp detection tasks. Our results indicate that the hybrid model with a convolutional transformer encoder yields superior performance on most HEAR challenge tasks.
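For context, a convolutional transformer encoder typically pairs a convolutional stem, which captures local time-frequency patterns, with self-attention over the resulting frame sequence. A generic sketch of that architecture family, where every layer size and dimension is our assumption rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class ConvTransformerEncoder(nn.Module):
    """A convolutional stem feeding a Transformer over time frames."""
    def __init__(self, d_model=256, n_heads=4, n_layers=4, emb_dim=512):
        super().__init__()
        # Conv stem: local time-frequency patterns, 4x downsampling on both axes.
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, d_model, kernel_size=3, stride=2, padding=1), nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, emb_dim)

    def forward(self, mel):                    # mel: (batch, n_mels, time)
        x = self.stem(mel.unsqueeze(1))        # (batch, d_model, n_mels/4, time/4)
        x = x.mean(dim=2).transpose(1, 2)      # pool frequency -> (batch, frames, d_model)
        x = self.transformer(x)                # self-attention gives global context
        return self.head(x.mean(dim=1))        # mean-pool frames to a clip embedding

emb = ConvTransformerEncoder()(torch.randn(2, 64, 400))
print(emb.shape)  # torch.Size([2, 512])
```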