TY - GEN
T1 - Real-time 3D rendering using depth-based geometry reconstruction and view-dependent texture mapping
AU - Chen, Chih-Fan
AU - Bolas, Mark
AU - Suma, Evan
N1 - Copyright:
Copyright 2017 Elsevier B.V., All rights reserved.
PY - 2016/7/24
Y1 - 2016/7/24
N2 - With the recent proliferation of high-fidelity head-mounted displays (HMDs), there is increasing demand for realistic 3D content that can be integrated into virtual reality environments. However, creating photorealistic models is not only difficult but also time-consuming. A simpler alternative involves scanning objects in the real world and rendering their digitized counterparts in the virtual world. Capturing objects can be achieved by performing a 3D scan using widely available consumer-grade RGB-D cameras. This process involves reconstructing the geometric model from depth images generated using a structured-light or time-of-flight sensor. The colormap is determined by fusing data from multiple color images captured during the scan. Existing methods compute the color of each vertex by averaging the colors from all of these images. Blending colors in this manner creates low-fidelity models that appear blurry (Figure 1, right). Furthermore, this approach also yields textures with fixed lighting baked onto the model. This limitation becomes more apparent when viewed in head-tracked virtual reality, as the illumination (e.g., specular reflections) does not change appropriately based on the user's viewpoint.
AB - With the recent proliferation of high-fidelity head-mounted displays (HMDs), there is increasing demand for realistic 3D content that can be integrated into virtual reality environments. However, creating photorealistic models is not only difficult but also time-consuming. A simpler alternative involves scanning objects in the real world and rendering their digitized counterparts in the virtual world. Capturing objects can be achieved by performing a 3D scan using widely available consumer-grade RGB-D cameras. This process involves reconstructing the geometric model from depth images generated using a structured-light or time-of-flight sensor. The colormap is determined by fusing data from multiple color images captured during the scan. Existing methods compute the color of each vertex by averaging the colors from all of these images. Blending colors in this manner creates low-fidelity models that appear blurry (Figure 1, right). Furthermore, this approach also yields textures with fixed lighting baked onto the model. This limitation becomes more apparent when viewed in head-tracked virtual reality, as the illumination (e.g., specular reflections) does not change appropriately based on the user's viewpoint.
KW - 3D reconstruction
KW - Real-time rendering
KW - View-dependent texture mapping
UR - http://www.scopus.com/inward/record.url?scp=84985902379&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84985902379&partnerID=8YFLogxK
U2 - 10.1145/2945078.2945162
DO - 10.1145/2945078.2945162
M3 - Conference contribution
AN - SCOPUS:84985902379
T3 - SIGGRAPH 2016 - ACM SIGGRAPH 2016 Posters
BT - SIGGRAPH 2016 - ACM SIGGRAPH 2016 Posters
PB - Association for Computing Machinery, Inc
T2 - ACM International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2016
Y2 - 24 July 2016 through 28 July 2016
ER -