FCN-Transformer Feature Fusion for Polyp Segmentation
Colonoscopy is widely recognised as the gold standard procedure for the early
detection of colorectal cancer (CRC). Segmentation is valuable for two
significant clinical applications, namely lesion detection and classification,
providing a means to improve accuracy and robustness. However, the manual
segmentation of polyps in colonoscopy images is time-consuming, and the use of
deep learning (DL) to automate polyp segmentation has therefore become
important.
However, DL-based solutions can be vulnerable to overfitting and the resulting
inability to generalise to images captured by different colonoscopes. Recent
transformer-based architectures for semantic segmentation both achieve higher
performance and generalise better than alternatives; however, they typically
predict a segmentation map of $\frac{h}{4}\times\frac{w}{4}$ spatial dimensions
for an $h\times w$ input image. To address this, we propose a new architecture for
full-size segmentation which leverages the strengths of a transformer in
extracting the most important features for segmentation in a primary branch,
while compensating for its limitations in full-size prediction with a secondary
fully convolutional branch. The resulting features from both branches are then
fused for the final prediction of an $h\times w$ segmentation map. We demonstrate
our method's state-of-the-art performance with respect to the mDice, mIoU,
mPrecision, and mRecall metrics, on both the Kvasir-SEG and CVC-ClinicDB
dataset benchmarks. Additionally, we train the model on each of these datasets
and evaluate it on the other to demonstrate its superior generalisation
performance.
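
To make the two-branch design concrete, the following is a minimal sketch of the fusion idea in PyTorch. All module names are hypothetical stand-ins, not the paper's implementation: strided convolutions mock a transformer branch that outputs $\frac{h}{4}\times\frac{w}{4}$ features (the actual model would use a pre-trained transformer encoder), while a fully convolutional branch preserves the full $h\times w$ resolution; the upsampled coarse features and the full-size features are concatenated and fused by a small prediction head.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerBranch(nn.Module):
    # Stand-in for the transformer branch: strided convolutions mock an
    # encoder whose output is at h/4 x w/4 resolution, as described above.
    def __init__(self, ch=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, ch, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.encode(x)  # (B, ch, h/4, w/4)

class FCNBranch(nn.Module):
    # Fully convolutional branch: no downsampling, so spatial detail at the
    # full h x w resolution is preserved.
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)  # (B, ch, h, w)

class FusionSegmenter(nn.Module):
    # Fuses upsampled transformer features with full-size FCN features and
    # predicts a single-channel h x w segmentation map.
    def __init__(self, ch=64):
        super().__init__()
        self.transformer = TransformerBranch(ch)
        self.fcn = FCNBranch(ch)
        self.head = nn.Sequential(
            nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, kernel_size=1),
        )

    def forward(self, x):
        t = self.transformer(x)  # coarse h/4 x w/4 features
        t = F.interpolate(t, size=x.shape[-2:], mode="bilinear",
                          align_corners=False)  # upsample to h x w
        f = self.fcn(x)  # full-resolution features
        return self.head(torch.cat([t, f], dim=1))  # (B, 1, h, w) logits

x = torch.randn(1, 3, 352, 352)
print(FusionSegmenter()(x).shape)  # torch.Size([1, 1, 352, 352])

Bilinear upsampling of the coarse transformer features followed by concatenation is only one simple fusion choice; the point of the sketch is that the full-resolution FCN features supply the spatial detail that the $\frac{h}{4}\times\frac{w}{4}$ transformer output lacks.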