ScribFormer: Transformer Makes CNN Work Better for Scribble-Based Medical Image Segmentation
Most recent scribble-supervised segmentation methods adopt a CNN framework
with an encoder-decoder architecture. Despite its many benefits, this
framework generally captures only short-range feature dependencies, because
convolutional layers have local receptive fields, which makes it difficult to
learn global shape information from the limited supervision that scribble
annotations provide. To address this issue, this paper proposes a
new CNN-Transformer hybrid solution for scribble-supervised medical image
segmentation called ScribFormer. The proposed ScribFormer model has a
triple-branch structure, i.e., a hybrid of a CNN branch, a Transformer
branch, and an attention-guided class activation map (ACAM) branch.
Specifically, the CNN branch collaborates with the Transformer branch to fuse
the local features learned by the CNN with the global representations obtained
from the Transformer, which effectively overcomes the limitations of existing
scribble-supervised segmentation methods. Furthermore, the ACAM branch unifies
the shallow and deep convolutional features to further improve the model's
performance. Extensive experiments on two public
datasets and one private dataset show that our ScribFormer outperforms
state-of-the-art scribble-supervised segmentation methods and even achieves
better results than fully-supervised segmentation
methods. The code is released at https://github.com/HUANGLIZI/ScribFormer.