DiverseDream

Diverse Text-to-3D Synthesis
with Augmented Text Embedding

ECCV 2024

VinAI Research, Trinity College Dublin
*Equal contribution

DiverseDream can generate diverse 3D objects given the same text prompt.

Abstract

Text-to-3D synthesis has recently emerged as a new approach to sampling 3D models by adopting pretrained text-to-image models as guiding visual priors. An intriguing but underexplored problem with existing text-to-3D methods is that 3D models obtained from the sampling-by-optimization procedure tend to exhibit mode collapse, and hence poor diversity, in their results. In this paper, we provide an analysis and identify potential causes of such limited diversity, which motivates us to devise a new method that considers the joint generation of different 3D models from the same text prompt. We propose to use augmented text prompts via textual inversion of reference images to diversify the joint generation. We show that our method leads to improved diversity in text-to-3D synthesis qualitatively and quantitatively.

Method

Figure: Overview of the DiverseDream pipeline.

We translate the diversity of augmented text prompts to the resulting 3D models via a two-stage method.
Stage 1: HiPer token inversion (left): for each reference image, we learn a HiPer token $h_i$ such that the prompt $[y; h_i]$ reconstructs that reference image.
Stage 2: Textual score distillation (right): we run multi-particle variational inference to optimize the 3D models from the text prompt $y$. In each iteration, we randomly sample a particle $\theta_i$ and render its image $x_i$. The optimization of $\theta_i$ is conditioned on the augmented text prompt $y'_i = [y; h^*_i; \phi]$, where $\phi$ is a shared learnable embedding. The rendered image and the augmented prompt are fed to a pretrained Stable Diffusion model to compute the TSD loss and the MSE loss, which update the NeRF parameters and the shared token $\phi$, respectively, at each iteration.

Comparison

BibTeX

@inproceedings{DiverseDream,
  title={Diverse Text-to-3D Synthesis with Augmented Text Embedding},
  author={Uy Dieu Tran and Minh Luu and Phong Ha Nguyen and Khoi Nguyen and Binh-Son Hua},
  booktitle={ECCV},
  year={2024},
}