Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators

Authors: Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi


Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. Our code will be open sourced at: https://github.com/Picsart-AI-Research/Text2Video-Zero .
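The cross-frame attention idea in modification (ii) can be illustrated with a minimal sketch: instead of each frame attending to its own keys and values (standard self-attention), every frame's queries attend to the keys and values of the first frame, anchoring appearance to that frame. The function name, single-head formulation, and omission of the usual Q/K/V projection matrices are simplifications for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(frame_feats, d_k):
    """Hypothetical sketch of cross-frame attention.

    frame_feats: array of shape (F, N, d) -- F frames, N tokens, d channels.
    Each frame supplies its own queries, but keys and values are always
    taken from frame 0, so all frames attend to the first frame's content.
    """
    K = frame_feats[0]          # keys from the FIRST frame only, (N, d)
    V = frame_feats[0]          # values from the first frame, (N, d)
    outputs = []
    for Q in frame_feats:       # queries come from each individual frame
        scores = Q @ K.T / np.sqrt(d_k)        # (N, N) attention logits
        outputs.append(softmax(scores) @ V)    # weighted sum of frame-0 values
    return np.stack(outputs)    # (F, N, d)
```

Because keys and values are fixed to frame 0, two frames with identical query features produce identical outputs, which is the mechanism that keeps foreground identity stable across the generated video.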
