Medical Image Segmentation Using Squeeze-and-Expansion Transformers
Medical image segmentation is important for computer-aided diagnosis. Good
segmentation requires the model to see the big picture and fine details
simultaneously, i.e., to learn image features that incorporate large context
while keeping high spatial resolutions. To approach this goal, the most widely
used methods, U-Net and its variants, extract and fuse multi-scale features.
However, the fused features still have small "effective receptive fields"
focused on local image cues, limiting their performance. In this work, we
propose Segtran, an alternative segmentation framework based on transformers,
which have unlimited "effective receptive fields" even at high feature
resolutions. The core of Segtran is a novel Squeeze-and-Expansion transformer:
a squeezed attention block regularizes the self-attention of transformers, and
an expansion block learns diversified representations. Additionally, we propose
a new positional encoding scheme for transformers, imposing a continuity
inductive bias for images. Experiments were performed on 2D and 3D medical
image segmentation tasks: optic disc/cup segmentation in fundus images
(REFUGE'20 challenge), polyp segmentation in colonoscopy images, and brain
tumor segmentation in MRI scans (BraTS'19 challenge). Compared with
representative existing methods, Segtran consistently achieved the highest
segmentation accuracy, and exhibited good cross-domain generalization
capabilities. The source code of Segtran is released at
https://github.com/askerlee/segtran.
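
To make the abstract's two architectural ideas concrete, the following is a
minimal, self-contained PyTorch sketch of what a Squeeze-and-Expansion layer
and a continuity-biased positional encoding could look like. It is an
illustration under stated assumptions, not the paper's implementation (see the
linked repository for that): the squeezed attention is rendered as attention
routed through a small set of learned inducing tokens, the expansion block as
a multi-branch feed-forward whose outputs are averaged, and the positional
code as a smooth learnable-sinusoidal function of normalized pixel
coordinates. All class and parameter names (SqueezedAttention, ExpansionBlock,
num_squeezed, num_modes, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

class SqueezedAttention(nn.Module):
    """Attention routed through a small set of learned 'squeezed' tokens.

    Instead of full NxN self-attention, the N features first attend into
    M << N inducing tokens, and the features then attend back to those
    tokens. The bottleneck regularizes attention; this is one plausible
    reading of the squeezed attention block, not the official design.
    """
    def __init__(self, dim, num_heads=8, num_squeezed=64):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_squeezed, dim))
        self.attn_in = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_out = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                              # x: (B, N, C)
        ind = self.inducing.expand(x.size(0), -1, -1)  # (B, M, C)
        squeezed, _ = self.attn_in(ind, x, x)          # compress into M tokens
        out, _ = self.attn_out(x, squeezed, squeezed)  # expand back to N
        return out

class ExpansionBlock(nn.Module):
    """Multi-branch feed-forward whose branch outputs are averaged.

    A stand-in for the expansion block that 'learns diversified
    representations': each branch sees the same input but learns its own
    transformation (a mixture-of-feed-forwards sketch).
    """
    def __init__(self, dim, num_modes=4, hidden_mult=2):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden_mult * dim),
                          nn.GELU(),
                          nn.Linear(hidden_mult * dim, dim))
            for _ in range(num_modes))

    def forward(self, x):
        return torch.stack([b(x) for b in self.branches], dim=0).mean(dim=0)

class ContinuousPosEncoding(nn.Module):
    """Positional code that is a smooth function of pixel coordinates.

    Nearby pixels receive similar codes, giving the continuity inductive
    bias the abstract mentions; sketched here as a learnable sinusoid.
    """
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2, dim)  # maps (x, y) to a phase per channel

    def forward(self, coords):         # coords: (B, N, 2), normalized to [0, 1]
        return torch.sin(self.proj(coords))

class SegtranLayer(nn.Module):
    """One pre-norm transformer layer in the sketched style."""
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = SqueezedAttention(dim)
        self.ffn = ExpansionBlock(dim)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        x = x + self.ffn(self.norm2(x))
        return x

# Usage on a toy 16x16 feature map with 256 channels:
B, H, W, C = 2, 16, 16, 256
feats = torch.randn(B, H * W, C)
ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(1, -1, 2).expand(B, -1, -1)
x = feats + ContinuousPosEncoding(C)(coords)
x = SegtranLayer(C)(x)                 # (B, H*W, C)
```

Because the positional code is a continuous function of coordinates rather
than a per-index lookup table, it also transfers naturally across input
resolutions, which matters when the transformer operates on high-resolution
feature maps.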