This paper addresses the problem of inverse rendering from photometric images. Existing approaches suffer from self-shadows, inter-reflections, and a lack of constraints on surface reflectance, leading to inaccurate decomposition of reflectance and illumination due to the ill-posed nature of inverse rendering. In this work, we propose a new method for neural inverse rendering. Our method jointly optimizes the light source position to account for self-shadows in images, and computes indirect illumination using a differentiable rendering layer and an importance sampling strategy. To enhance surface reflectance decomposition, we introduce a new regularization that distills DINO features to foster accurate and consistent material decomposition. Extensive experiments on synthetic and real datasets demonstrate that our method outperforms state-of-the-art methods in reflectance decomposition.
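The importance sampling strategy mentioned above can be illustrated with a standard cosine-weighted hemisphere sampler, which concentrates indirect-illumination samples where the cosine foreshortening term is largest. This is a generic sketch, not the paper's exact sampler; the function name and parameterization are our own.

```python
import numpy as np

def cosine_weighted_samples(normal, n_samples, rng=None):
    """Sample directions on the hemisphere about `normal` with pdf cos(theta)/pi.

    Illustrative only: the paper's sampler and parameterization may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    r = np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    # Sample in a local frame where z is the surface normal.
    local = np.stack([r * np.cos(phi), r * np.sin(phi),
                      np.sqrt(np.clip(1.0 - u1, 0.0, 1.0))], axis=-1)
    # Build an orthonormal basis around the surface normal.
    n = normal / np.linalg.norm(normal)
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(a, n); t /= np.linalg.norm(t)
    b = np.cross(n, t)
    dirs = local[:, :1] * t + local[:, 1:2] * b + local[:, 2:3] * n
    # The pdf of each sampled direction is cos(theta) / pi.
    pdf = np.clip(dirs @ n, 0.0, 1.0) / np.pi
    return dirs, pdf
```

Dividing each sample's radiance contribution by this pdf yields an unbiased Monte Carlo estimate of the indirect-lighting integral.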
Our method jointly optimizes the light source position to account for self-shadows and models inter-reflections. To alleviate the ill-posed nature of inverse rendering (multiple plausible solutions and limited observations), we inject DINO features into the specular albedo and roughness networks as prior knowledge of material grouping, regularizing the material decomposition.
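One way such a material-grouping prior can act as a regularizer is a pairwise loss that pulls the predicted materials of DINO-similar pixels together. The sketch below is a hypothetical form for illustration; the paper's actual distillation loss and weighting may differ.

```python
import numpy as np

def dino_grouping_loss(materials, dino_feats):
    """Pairwise grouping regularizer (illustrative sketch).

    Pixels whose DINO features are similar are encouraged to share similar
    material values (e.g. specular albedo / roughness).

    materials:  (N, M) predicted material parameters for N sampled pixels.
    dino_feats: (N, D) L2-normalized DINO features for the same pixels.
    """
    sim = dino_feats @ dino_feats.T            # (N, N) cosine similarity
    w = np.clip(sim, 0.0, None)                # attract only similar pairs
    diff = materials[:, None, :] - materials[None, :, :]
    pair_cost = (diff ** 2).sum(-1)            # (N, N) squared material distance
    return float((w * pair_cost).sum() / (w.sum() + 1e-8))
```

The loss vanishes when all similar-feature pixels already agree on their materials, so it constrains the decomposition without fixing absolute material values.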
Our method predicts consistent specular albedo and roughness through explicit ray tracing and shading modeling. Given the estimated geometry (mesh) and materials (diffuse albedo, specular albedo, specular roughness), we can export 3D assets to a traditional graphics pipeline. Compared with IRON, our method achieves more consistent and accurate decomposition results.
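To show how diffuse albedo, specular albedo, and roughness combine in shading, here is a minimal point-light shading sketch with a Lambertian diffuse term and a GGX-style specular lobe. This is a generic simplified microfacet model (geometry and Fresnel terms omitted), not necessarily the paper's exact shading model.

```python
import numpy as np

def shade(n, v, l, kd, ks, rough, light_intensity=1.0):
    """Simplified point-light shading: Lambertian diffuse + GGX-style specular.

    n, v, l: surface normal, view direction, light direction (3-vectors).
    kd, ks:  diffuse and specular albedo; rough: specular roughness in (0, 1].
    Illustrative sketch only; shadowing/masking and Fresnel are omitted.
    """
    n, v, l = (x / np.linalg.norm(x) for x in (n, v, l))
    h = v + l
    h /= np.linalg.norm(h)
    ndl = max(float(n @ l), 0.0)
    ndh = max(float(n @ h), 0.0)
    a2 = max(rough, 1e-3) ** 4                       # alpha = rough^2, D uses alpha^2
    d = a2 / (np.pi * (ndh * ndh * (a2 - 1.0) + 1.0) ** 2)  # GGX normal distribution
    spec = ks * d / 4.0
    diffuse = kd / np.pi
    return (diffuse + spec) * ndl * light_intensity
```

Because the model is an explicit analytic BRDF over these three material maps, the decomposed maps can be plugged into any renderer that supports a comparable microfacet material.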
@misc{bao2024pir,
  title={Photometric Inverse Rendering: Shading Cues Modeling and Surface Reflectance Regularization},
  author={Jingzhi Bao and Guanying Chen and Shuguang Cui},
  year={2024},
  eprint={2408.06828},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}