TY - GEN
T1 - Capture to Rendering Pipeline for Generating Dynamically Relightable Virtual Objects with Handheld RGB-D Cameras
AU - Chen, Chih-Fan
AU - Suma Rosenberg, Evan
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/11/1
Y1 - 2020/11/1
AB - We present a complete end-to-end pipeline for generating dynamically relightable virtual objects captured using a single handheld consumer-grade RGB-D camera. The proposed system plausibly replicates the geometry, texture, illumination, and surface reflectance properties of non-Lambertian objects, making them suitable for integration within virtual reality scenes that contain arbitrary illumination. First, the geometry of the target object is reconstructed from depth images captured using a handheld camera. To obtain nearly drift-free texture maps of the virtual object, a set of images selected from the original color stream is used for camera pose optimization. Our approach further separates these images into diffuse (view-independent) and specular (view-dependent) components using low-rank decomposition. The lighting conditions during capture and the reflectance properties of the virtual object are subsequently estimated from the computed specular maps. By combining these parameters with the diffuse texture, the reconstructed model can then be rendered in real-time virtual reality scenes that plausibly replicate real-world illumination at the point of capture. Furthermore, these objects can interact with arbitrary virtual lights that vary in direction, intensity, and color.
KW - content creation
KW - reconstruction
KW - scanning
KW - virtual reality
UR - http://www.scopus.com/inward/record.url?scp=85095820969&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85095820969&partnerID=8YFLogxK
DO - 10.1145/3385956.3418952
M3 - Conference contribution
AN - SCOPUS:85095820969
T3 - Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST
BT - Proceedings - VRST 2020
A2 - Spencer, Stephen N.
PB - Association for Computing Machinery
T2 - 26th ACM Symposium on Virtual Reality Software and Technology, VRST 2020
Y2 - 1 November 2020 through 4 November 2020
ER -