SNeRF: Stylized Neural Implicit Representations for 3D Scenes

A recent paper looks into the problem of stylizing 3D scenes to match a reference style image. For instance, with a VR headset on, one could see the world through the artistic lens of Pablo Picasso instead of being constrained by the real environment. The researchers propose to combine neural radiance fields (NeRF) and image-based neural style transfer to achieve 3D scene stylization.

SNeRF’s stylization results on 360° scenes using NeRF++. Image credit: arXiv:2207.02363 [cs.CV]

NeRF provides a strong inductive bias to maintain multi-view consistency, and neural style transfer allows a flexible stylization method that does not require dedicated example inputs. Additionally, a novel training scheme reduces the GPU memory requirement during training, enabling high-resolution results on a single modern GPU.

Evaluations show that the proposed approach delivers better image and video quality than state-of-the-art methods.

This paper presents a stylized novel view synthesis method. Applying state-of-the-art stylization methods to novel views frame by frame often causes jittering artifacts due to the lack of cross-view consistency. Therefore, this paper investigates 3D scene stylization that provides a strong inductive bias for consistent novel view synthesis. Specifically, we adopt the emerging neural radiance fields (NeRF) as our choice of 3D scene representation for their ability to render high-quality novel views for a variety of scenes. However, as rendering a novel view from a NeRF requires a large number of samples, training a stylized NeRF requires a large amount of GPU memory that goes beyond an off-the-shelf GPU capacity. We introduce a new training method to address this problem by alternating the NeRF and stylization optimization steps. Such a method enables us to make full use of our hardware memory capacity to both generate images at higher resolution and adopt more expressive image style transfer methods. Our experiments show that our method generates stylized NeRFs for a wide range of content, including indoor, outdoor and dynamic scenes, and synthesizes high-quality novel views with cross-view consistency.
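The alternating scheme described above can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: all function names are hypothetical, the "NeRF" is reduced to a single scalar scene parameter, and the style transfer is a trivial pixel shift. The point it shows is the memory-saving structure of the loop: full views are rendered without gradients, stylized in 2D, and the scene representation is then re-fitted to those stylized targets.

```python
# Toy sketch of SNeRF-style alternating optimization (hypothetical helpers).
# In the real method, `scene` is a full NeRF, `stylize` is an image-based
# neural style transfer network, and `fit_nerf` runs gradient steps on
# sampled rays; here everything is reduced to scalars for illustration.

def render_views(scene, poses):
    # Stand-in for volume rendering: one "image" (a scalar) per pose.
    # Done outside the fitting loop, so no gradients need to be stored.
    return [scene * p for p in poses]

def stylize(images, strength=0.5):
    # Stand-in for 2D style transfer: push each "pixel" toward 1.0.
    return [img + strength * (1.0 - img) for img in images]

def fit_nerf(scene, poses, targets, lr=0.1, steps=100):
    # Re-fit the scene to the stylized target views by gradient descent
    # on the mean squared rendering error.
    for _ in range(steps):
        grad = sum(p * (scene * p - t) for p, t in zip(poses, targets))
        scene -= lr * grad / len(poses)
    return scene

def alternating_stylization(scene, poses, rounds=5):
    for _ in range(rounds):
        views = render_views(scene, poses)       # step 1: render (no grad)
        targets = stylize(views)                 # step 2: 2D style transfer
        scene = fit_nerf(scene, poses, targets)  # step 3: re-fit the scene
    return scene
```

Because steps 1 and 2 operate on whole images without backpropagating through the renderer, only step 3 needs gradient memory, which is what lets the full pipeline fit on a single GPU at higher resolution.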

Research article: Nguyen-Phuoc, T., Liu, F., and Xiao, L., “SNeRF: Stylized Neural Implicit Representations for 3D Scenes”, 2022. Link: arxiv.org/abs/2207.02363