Neural Fields for Scalable Scene Reconstruction
From Design Computation
Revision as of 19:18, 15 March 2023
DC I/O 2022 Keynote by James Tompkin. https://doi.org/10.47330/DCIO.2022.AXBL8798
Abstract
Neural fields are a new (and old!) approach to solving problems over spacetime via first-order optimization of a neural network. Over the past three years, combining neural fields with classic computer graphics approaches has allowed us to make significant advances in solving computer vision problems such as scene reconstruction. I will present recent work that can reconstruct indoor scenes for photorealistic interactive exploration using new scalable hybrid neural field representations. This has applications wherever a real-world place needs to be digitized, especially for visualization purposes.
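To make the abstract's core idea concrete: a neural field is a network that maps coordinates to signal values and is fit to observations by plain gradient descent. The sketch below is not from the talk; it is a minimal, assumption-laden illustration in which a tiny hand-written MLP reconstructs a 1D signal (a stand-in for a "scene") via first-order optimization.

```python
import numpy as np

# Minimal neural-field sketch (illustrative only, not the talk's method):
# fit f(x) -> y, a coordinate-to-value network, to samples of a 1D signal
# using manually derived gradients and plain gradient descent.

rng = np.random.default_rng(0)

# The "scene": a 1D signal sampled at 64 coordinates in [0, 1].
x = np.linspace(0.0, 1.0, 64).reshape(-1, 1)
y = np.sin(2 * np.pi * x)

# One hidden layer, 1 -> 32 -> 1, tanh activations.
W1 = rng.normal(0.0, 2.0, (1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    h = np.tanh(x @ W1 + b1)      # hidden activations at each coordinate
    pred = h @ W2 + b2            # field value at each coordinate
    err = pred - y
    loss = np.mean(err ** 2)      # MSE reconstruction loss

    # Backpropagate the MSE loss by hand (first-order optimization).
    g_pred = 2.0 * err / len(x)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_pre = g_h * (1.0 - h ** 2)  # derivative of tanh
    g_W1 = x.T @ g_pre
    g_b1 = g_pre.sum(0)

    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= lr * g               # in-place gradient step

print(f"final reconstruction MSE: {loss:.4f}")
```

Real scene-reconstruction systems replace the 1D signal with radiance or density over 3D space (and the plain MLP with scalable hybrid representations), but the optimization loop has the same shape.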
Presentation
Video Recording: https://youtu.be/tmuQJCVKTuI
Conference Slides
Keywords
Reference
DOI: https://doi.org/10.47330/DCIO.2022.AXBL8798