AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training

Yifan Jiang1, 2, Peter Hedman2, Ben Mildenhall2, Dejia Xu1, Jonathan T. Barron2,
Zhangyang Wang1, Tianfan Xue3

1 UT Austin   2 Google Research   3 CUHK

[Paper]             [Supplementary Materials]       [Visual Comparison]



Abstract

Neural Radiance Fields (NeRFs) are a powerful representation for modeling a 3D scene as a continuous function. Though NeRF is able to render complex 3D scenes with view-dependent effects, few efforts have been devoted to exploring its limits in a high-resolution setting. Specifically, existing NeRF-based methods face several limitations when reconstructing high-resolution real scenes: a very large number of parameters, misaligned input data, and overly smooth details. In this work, we conduct the first pilot study on training NeRF with high-resolution data and propose the corresponding solutions: 1) marrying the multilayer perceptron (MLP) with convolutional layers, which encodes more neighborhood information while reducing the total number of parameters; 2) a novel training strategy to address misalignment caused by moving objects or small camera calibration errors; and 3) a high-frequency aware loss. Our approach is nearly cost-free, introducing no noticeable training or testing overhead, and experiments on different datasets demonstrate that it recovers more high-frequency detail than current state-of-the-art NeRF models.
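The alignment-aware training strategy (point 2 above) tolerates small shifts between a rendered patch and its ground-truth counterpart. The sketch below illustrates the general idea only: for each rendered patch, search a small window of integer offsets in the ground-truth image and keep the lowest error, so slightly misaligned pixels are not penalized as missing detail. This is a minimal NumPy illustration under our own assumptions, not the paper's actual loss; the function name, patch size, and search radius are hypothetical.

```python
import numpy as np

def alignment_tolerant_loss(rendered, gt, patch=8, radius=2):
    """Illustrative sketch (not the paper's exact loss): per-patch MSE,
    minimized over a small window of integer offsets in the ground truth,
    so the loss forgives slight misalignment instead of blurring detail."""
    H, W, _ = rendered.shape
    total, count = 0.0, 0
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            pred = rendered[y:y + patch, x:x + patch]
            best = np.inf
            # Search a (2*radius+1)^2 window of candidate offsets.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + patch <= H and 0 <= xx and xx + patch <= W:
                        ref = gt[yy:yy + patch, xx:xx + patch]
                        best = min(best, float(np.mean((pred - ref) ** 2)))
            total += best
            count += 1
    return total / count
```

With radius=0 this reduces to an ordinary patch-wise MSE; a positive radius strictly lowers the loss whenever the images differ only by a small shift.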

Misalignment Found in NeRFs

Analysis of misalignment between rendered and ground-truth images. mip-NeRF 360++: Images rendered by a stronger mip-NeRF 360 model (with 16× larger MLPs than the original). Ground Truth: The captured images used for training and testing. Optical Flow: Optical flow between the mip-NeRF 360++ renderings and the ground-truth images, estimated by PWC-Net. Significant misalignment is present in both training and test view renderings.
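The paper uses PWC-Net, a learned dense-flow model, to visualize this misalignment. As a lightweight stand-in for quickly checking whether a rendering is grossly misaligned with its ground truth, a single dominant translation can be estimated with phase correlation. This is a hypothetical helper, not part of the paper's pipeline; it only recovers one global shift, not dense flow.

```python
import numpy as np

def estimate_global_shift(rendered, gt):
    """Phase correlation: estimate the dominant integer (dy, dx) such that
    np.roll(rendered, (dy, dx), axis=(0, 1)) best matches gt.
    A crude stand-in for dense optical flow (the paper uses PWC-Net),
    useful only for spotting gross global misalignment."""
    F1 = np.fft.fft2(rendered)
    F2 = np.fft.fft2(gt)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-8
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    H, W = rendered.shape
    if dy > H // 2:
        dy -= H
    if dx > W // 2:
        dx -= W
    return int(dy), int(dx)
```

For a pair of views that differ by a pure circular translation, the returned shift is exact; for real renderings it only indicates the dominant offset.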

Visual Comparisons

More interactive comparisons: [Click here]

Video Results

bicycle

gardenvase

stump

flowerbed

treehill


Citation

If you want to cite our work, please use:
      
        @inproceedings{jiang2023alignerf,
          title={AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training},
          author={Jiang, Yifan and Hedman, Peter and Mildenhall, Ben and Xu, Dejia and Barron, Jonathan T and Wang, Zhangyang and Xue, Tianfan},
          booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
          pages={46--55},
          year={2023}
        }