Personalize Segment Anything Model with One Shot
Driven by large-data pre-training, the Segment Anything Model (SAM) has been
demonstrated as a powerful, promptable framework, revolutionizing
segmentation models. Despite its generality, customizing SAM for specific
visual concepts without manual prompting remains underexplored, e.g.,
automatically segmenting your pet dog in different images. In this paper, we
propose a training-free personalization approach for SAM, termed PerSAM.
Given only a single image with a reference mask, PerSAM first localizes the
target concept with a location prior, then segments it in other images or
videos via three techniques: target-guided attention, target-semantic
prompting, and cascaded post-refinement. In this way, we effectively adapt SAM
for private use without any training.
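For intuition, here is a minimal PyTorch sketch of such a training-free location prior: the reference features are average-pooled inside the one-shot mask and matched by cosine similarity against every spatial position of the test image. The `encoder` (any frozen image encoder returning a (C, H, W) feature map) and all names here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def location_prior(encoder, ref_image, ref_mask, test_image):
    """Confidence map locating the reference concept in test_image (sketch)."""
    with torch.no_grad():
        ref_feat = encoder(ref_image)    # (C, H, W) reference features
        test_feat = encoder(test_image)  # (C, H, W) test-image features

    # Pool the reference features inside the mask into one target embedding.
    mask = F.interpolate(ref_mask[None, None].float(),
                         size=ref_feat.shape[-2:], mode="nearest")[0, 0]
    target = (ref_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)

    # Cosine similarity between the target embedding and each test location.
    test_flat = F.normalize(test_feat.flatten(1), dim=0)   # (C, H*W)
    target = F.normalize(target, dim=0)                    # (C,)
    sim = (target @ test_flat).view(test_feat.shape[-2:])  # (H, W)
    return sim  # its peak can seed SAM's positive point prompt
```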
To further alleviate mask ambiguity, we present an efficient one-shot
fine-tuning variant, PerSAM-F. Freezing the entire SAM, we introduce two
learnable weights for multi-scale masks, training only 2 parameters within
10 seconds for improved performance.
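The following is a minimal PyTorch sketch of this 2-parameter idea, assuming SAM's three multi-scale mask logits and the one-shot reference mask are given (stand-in random tensors below); the softmax fusion rule and BCE loss are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskFusion(nn.Module):
    """Fuses SAM's three multi-scale mask logits with only 2 trainable scalars."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(2))  # the only trainable parameters

    def forward(self, masks):                  # masks: (3, H, W) logits
        # Three fusion weights from two free parameters (third logit fixed at 0).
        weights = torch.softmax(torch.cat([self.w, self.w.new_zeros(1)]), dim=0)
        return (weights[:, None, None] * masks).sum(dim=0)  # fused (H, W) logits

# Stand-ins: in practice these come from a frozen SAM forward pass on the
# reference image and from its one-shot annotation.
sam_mask_logits = torch.randn(3, 256, 256)
ref_mask = (torch.rand(256, 256) > 0.5).float()

fusion = MaskFusion()
opt = torch.optim.AdamW(fusion.parameters(), lr=1e-3)
for _ in range(100):                           # finishes within seconds
    fused = fusion(sam_mask_logits)
    loss = F.binary_cross_entropy_with_logits(fused, ref_mask)
    opt.zero_grad()
    loss.backward()
    opt.step()
```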
To demonstrate our efficacy, we construct a new segmentation dataset, PerSeg,
for personalized evaluation, and test our methods on video object
segmentation, where they achieve competitive performance. Besides, our
approach can also enhance DreamBooth for personalizing Stable Diffusion in
text-to-image generation: the predicted mask discards the background
disturbance for better learning of the target appearance. Code is released
at https://github.com/ZrrSkywalker/Personalize-SAM
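On the DreamBooth side, one way to realize this background-discarding idea is to weight the diffusion reconstruction loss by the mask predicted by PerSAM, so that only the target's pixels supervise the learned concept. The sketch below assumes `noise_pred`, `noise`, and a spatially aligned `mask` come from a standard DreamBooth training step; it is an illustration, not the verbatim implementation.

```python
import torch.nn.functional as F

def masked_diffusion_loss(noise_pred, noise, mask):
    """MSE on noise prediction, kept only where mask == 1 (target foreground)."""
    per_pixel = F.mse_loss(noise_pred, noise, reduction="none")
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)
```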