TriPlaneNet: An Encoder for EG3D Inversion

Authors: Ananta R. Bhattarai, Matthias Nießner, Artem Sevastopolsky


Recent progress in NeRF-based GANs has introduced a number of approaches for high-resolution and high-fidelity generative modeling of human heads with a possibility for novel view rendering. At the same time, one must solve an inverse problem to be able to re-render or modify an existing image or video. Despite the success of universal optimization-based methods for 2D GAN inversion, those, applied to 3D GANs, may fail to produce 3D-consistent renderings. Fast encoder-based techniques, such as those developed for StyleGAN, may also be less appealing due to the lack of identity preservation. In our work, we introduce a real-time method that bridges the gap between the two approaches by directly utilizing the tri-plane representation introduced for the EG3D generative model. In particular, we build upon a feed-forward convolutional encoder for the latent code and extend it with a fully-convolutional predictor of tri-plane numerical offsets. As shown in our work, the renderings are similar in quality to optimization-based techniques and significantly outperform the baselines for novel views. As we empirically prove, this is a consequence of directly operating in the tri-plane space, not in the GAN parameter space, while making use of an encoder-based trainable approach.
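The two-branch design described in the abstract (a convolutional encoder producing a latent code, plus a fully-convolutional branch predicting additive tri-plane offsets) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: all module names, layer choices, and dimensions (a 512-d latent code and 96-channel, 256x256 tri-planes, as in EG3D) are assumptions.

```python
# Hypothetical sketch of the two-branch inversion idea described above.
# All names and sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentEncoder(nn.Module):
    """Feed-forward convolutional encoder: image -> latent code for the GAN."""

    def __init__(self, latent_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling to (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))  # (B, latent_dim)


class TriPlaneOffsetPredictor(nn.Module):
    """Fully-convolutional branch predicting additive numerical offsets
    for the generator's tri-planes (3 planes x 32 channels = 96,
    at 256x256 resolution -- EG3D-style sizes, assumed here)."""

    def __init__(self, in_ch=3, plane_ch=96, plane_res=256):
        super().__init__()
        self.plane_res = plane_res
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, plane_ch, 3, padding=1),
        )

    def forward(self, img):
        # Resize the input to the tri-plane resolution, then predict offsets.
        img = F.interpolate(img, size=self.plane_res)
        return self.net(img)                  # (B, 96, 256, 256)


# Usage: the predicted offsets would be added to the tri-planes produced
# by the generator from the latent code, before volume rendering.
img = torch.randn(1, 3, 128, 128)
w = LatentEncoder()(img)                      # latent code branch
offsets = TriPlaneOffsetPredictor()(img)      # tri-plane offset branch
```

The key point the sketch illustrates is that the offset branch operates directly in tri-plane space rather than in the GAN's parameter or latent space, which is what the abstract credits for the improved novel-view quality.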
