Language Guided Adversarial Purification
Adversarial purification using generative models demonstrates strong
adversarial defense performance. These methods are classifier- and
attack-agnostic, making them versatile but often computationally intensive.
Recent strides in diffusion and score networks have improved image generation
and, by extension, adversarial purification. Another highly efficient class of
adversarial defense methods known as adversarial training requires specific
knowledge of attack vectors, forcing them to be trained extensively on
adversarial examples. To overcome these limitations, we introduce a new
framework, namely Language Guided Adversarial Purification (LGAP), utilizing
pre-trained diffusion models and caption generators to defend against
adversarial attacks. Given an input image, our method first generates a
caption, which is then used to guide the adversarial purification process
through a diffusion network. We evaluate our approach against strong
adversarial attacks, demonstrating its effectiveness in enhancing adversarial
robustness. Our results indicate that LGAP outperforms most existing
adversarial defense techniques without requiring specialized network training.
This underscores the generalizability of models trained on large datasets,
highlighting a promising direction for further research.
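
As a rough illustration of the caption-then-purify pipeline described above, the following sketch captions an input image with an off-the-shelf BLIP model and then regenerates the image with a Stable Diffusion image-to-image pipeline conditioned on that caption. This is a minimal sketch, not the authors' implementation: the model checkpoints, noise strength, and guidance scale are illustrative assumptions.

```python
# Minimal sketch of language-guided purification: caption a (possibly
# adversarial) image, then partially noise and denoise it with a pre-trained
# diffusion model conditioned on that caption. Checkpoints and hyperparameters
# below are assumptions for illustration, not the paper's configuration.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Caption generator (assumed: BLIP base captioning checkpoint).
blip_processor = BlipProcessor.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)
blip_model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

# 2. Pre-trained diffusion model used in image-to-image mode.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)


def purify(image: Image.Image,
           strength: float = 0.3,
           guidance_scale: float = 7.5) -> Image.Image:
    """Return a purified copy of `image` guided by its own caption."""
    # Generate a caption for the (possibly perturbed) input image.
    inputs = blip_processor(images=image, return_tensors="pt").to(device)
    caption_ids = blip_model.generate(**inputs, max_new_tokens=30)
    caption = blip_processor.decode(caption_ids[0], skip_special_tokens=True)

    # Re-generate the image conditioned on the caption; a small `strength`
    # preserves image semantics while washing out the adversarial perturbation.
    purified = pipe(
        prompt=caption,
        image=image.convert("RGB").resize((512, 512)),
        strength=strength,
        guidance_scale=guidance_scale,
    ).images[0]
    return purified


if __name__ == "__main__":
    adv = Image.open("adversarial_example.png")  # hypothetical input path
    purify(adv).save("purified.png")
```

In this sketch the strength parameter plays the role of the forward-noising amount: it trades off how much of the adversarial perturbation is removed against how faithfully the original image content is preserved.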