FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization
Generating images from natural language instructions is an intriguing yet
highly challenging task. We approach text-to-image generation by combining the
power of the pretrained CLIP representation with an off-the-shelf image
generator (a GAN), optimizing in the GAN's latent space to find images that
achieve the maximum CLIP score for the given input text. Compared to
traditional methods that train text-to-image generative models from scratch,
the CLIP+GAN approach is training-free, zero-shot, and can be easily
customized with different generators.
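As a rough illustration (not the authors' released code), the sketch below shows the basic CLIP+GAN loop: gradient ascent on a GAN latent to maximize CLIP similarity with the prompt. It assumes OpenAI's \texttt{clip} package and a hypothetical pretrained generator \texttt{gan} mapping latents to image batches in $[0,1]$; \texttt{optimize\_latent} is an illustrative name, and CLIP's input channel normalization is omitted for brevity.

\begin{verbatim}
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def clip_score(image, text_features):
    # Resize to CLIP's input resolution and take cosine similarity.
    # (Proper CLIP channel normalization is omitted for brevity.)
    image = F.interpolate(image, size=224, mode="bicubic")
    feats = model.encode_image(image)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats * text_features).sum(dim=-1)

def optimize_latent(gan, prompt, steps=500, lr=0.05, latent_dim=128):
    # `gan` is a hypothetical pretrained generator: latent -> image batch.
    tokens = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        text = model.encode_text(tokens)
        text = text / text.norm(dim=-1, keepdim=True)
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -clip_score(gan(z), text).mean()  # ascend the CLIP score
        loss.backward()
        opt.step()
    return z.detach()
\end{verbatim}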
However, optimizing the CLIP score in the GAN space poses a highly challenging
optimization problem, and off-the-shelf optimizers such as Adam fail to yield
satisfying results. In this work, we propose the FuseDream pipeline, which
improves the CLIP+GAN approach with three key techniques: 1) an AugCLIP score,
which robustifies the CLIP objective by applying random augmentations to the
candidate image; 2) a novel initialization and over-parameterization strategy
that allows us to efficiently navigate the non-convex optimization landscape
in the GAN space; 3) a composed generation technique which, by leveraging a
novel bi-level optimization formulation, can compose multiple images to extend
the GAN space and overcome its data bias.
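To make the first technique concrete, the hedged sketch below shows one way an AugCLIP-style objective can be formed: average the CLIP score over randomly augmented copies of the image, so that sharp, adversarial optima of the plain CLIP score are penalized. The specific augmentation (random resized crops) is illustrative, not necessarily the paper's exact recipe; \texttt{clip\_score} is the helper from the previous sketch.

\begin{verbatim}
import torch
import torchvision.transforms as T

augment = T.RandomResizedCrop(224, scale=(0.7, 1.0))

def aug_clip_score(image, text_features, num_aug=16):
    # Average the CLIP score over several randomly augmented copies:
    # an adversarial optimum of the plain score tends to collapse under
    # augmentation, while a genuinely matching image scores consistently.
    scores = [clip_score(augment(image), text_features)
              for _ in range(num_aug)]
    return torch.stack(scores).mean(dim=0)

# Drop-in replacement inside the optimization loop above:
# loss = -aug_clip_score(gan(z), text).mean()
\end{verbatim}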
When prompted with different input text, FuseDream can generate high-quality
images with varying objects, backgrounds, and artistic styles, and even novel
counterfactual concepts that do not appear in the training data of the GAN we
use. Quantitatively, the images generated by FuseDream achieve top-level
Inception score and FID on the MS COCO dataset, without any additional
architecture design or training. Our code is publicly available at
\url{https://github.com/gnobitab/FuseDream}.