Sequencer: Deep LSTM for Image Classification
In recent computer vision research, the advent of the Vision Transformer
(ViT) has rapidly reshaped architectural design: ViT achieved state-of-the-art
image classification performance using the self-attention mechanism originally
developed for natural language processing, and MLP-Mixer achieved competitive
performance using only simple multi-layer perceptrons. In contrast, several studies
have also suggested that carefully redesigned convolutional neural networks
(CNNs) can achieve performance comparable to ViT without resorting to these
new ideas. Against this background, there is growing interest in which
inductive biases are best suited to computer vision. Here we propose Sequencer,
a novel and competitive alternative to ViT that offers a fresh perspective on
these questions. Unlike ViTs, Sequencer models long-range
dependencies using LSTMs rather than self-attention layers. We also propose a
two-dimensional version of the Sequencer module, in which a single LSTM is
decomposed into vertical and horizontal LSTMs to enhance performance (a
minimal sketch of this decomposition appears below). Despite its simplicity,
several experiments demonstrate that Sequencer performs impressively well:
Sequencer2D-L, with 54M parameters, achieves 84.6% top-1 accuracy trained
only on ImageNet-1K. Moreover, we show that Sequencer transfers well to
downstream tasks and adapts robustly to input resolutions up to double the
training resolution.
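
To make the two-dimensional decomposition concrete, the following is a minimal
PyTorch sketch of a bidirectional vertical/horizontal LSTM block. The class
name BiLSTM2D, the hidden size, and the concatenate-then-project fusion are
illustrative assumptions; the abstract does not fix these implementation
details.

    import torch
    import torch.nn as nn

    class BiLSTM2D(nn.Module):
        # Processes a (B, H, W, C) feature map with two bidirectional LSTMs:
        # one scanning each column top-to-bottom, one scanning each row
        # left-to-right, then fuses both outputs with a point-wise projection.
        def __init__(self, dim: int, hidden_dim: int):
            super().__init__()
            self.v_lstm = nn.LSTM(dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            self.h_lstm = nn.LSTM(dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            # 2 directions x 2 orientations of hidden features, projected to dim.
            self.fc = nn.Linear(4 * hidden_dim, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, h, w, c = x.shape
            # Vertical: treat each of the W columns as a length-H sequence.
            v = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
            v, _ = self.v_lstm(v)
            v = v.reshape(b, w, h, -1).permute(0, 2, 1, 3)
            # Horizontal: treat each of the H rows as a length-W sequence.
            s = x.reshape(b * h, w, c)
            s, _ = self.h_lstm(s)
            s = s.reshape(b, h, w, -1)
            # Concatenate both orientations' features and project back to dim.
            return self.fc(torch.cat([v, s], dim=-1))

    # Example: a 14x14 feature map with 192 channels keeps its shape.
    block = BiLSTM2D(dim=192, hidden_dim=48)
    feats = torch.randn(2, 14, 14, 192)
    print(block(feats).shape)  # torch.Size([2, 14, 14, 192])

One motivation for this decomposition is that it shortens each recurrence from
a single sequence of length H*W to many sequences of length H or W, which
reduces sequential depth and lets all rows (or columns) be processed in
parallel as one batched LSTM call.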