abandoned.ai relies on two types of technology to facilitate its mission:
- Capturing Methods
- Rendering Methods
On occasion the boundaries between these two areas can blur, as different software and hardware applications package them together to create a finished product. This article serves as a broad overview of how these technologies came about and why each is pertinent to the current zeitgeist that makes abandoned.ai possible.
(1) Photography (1822)
Method : Capture
A technology as old as time, yet still a crucial part of the process of creating digital places. Cameras come in many configurations, some better suited than others to spatial reconstruction: 360° cameras can grab an entire environment in one go, and automated pan-and-tilt cameras can scan rooms on their own.
For guidelines see the section : Photography
(2) Photogrammetry (1867)
Method : Capture
To capture an entire environment, you need a lot of data, and with photographs the key is simply to take more pictures. For over a century people have understood how to combine pictures into a facsimile of a larger space. Photogrammetry in its raw form is a lot like a panorama: uninformed of geometry, but a vast depiction of space in rasterized form. It took a while before photogrammetric data could be processed into something more meaningful.
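As a minimal illustration of that panorama-like character, the sketch below stitches a handful of overlapping photos into a single raster using OpenCV's stitcher; the filenames are hypothetical placeholders.

```python
import cv2

# Hypothetical filenames for a set of overlapping photos of one space.
paths = ["room_01.jpg", "room_02.jpg", "room_03.jpg"]
images = [cv2.imread(p) for p in paths]

# The stitcher finds features shared between images and warps them
# into one continuous raster: geometry-unaware, like raw photogrammetry.
stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
```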
For guidelines see the section : Photogrammetry
(3) Structure from Motion (1976)
Method : Rendering
From this point onward the emphasis shifts away from the camera and toward the computer. Structure from Motion (SfM) takes a series of images that share overlapping identifiable features (known as landmarks) and puts them through a localization algorithm, which lets the computer recover the camera poses and place points in three-dimensional space corresponding to those features. We use COLMAP to accomplish SfM in our pipelines.
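As a sketch of what this looks like in practice, COLMAP's standard incremental pipeline can be driven from Python through its command-line interface; the paths here are placeholders.

```python
import os
import subprocess

def run_sfm(image_dir: str, workspace: str) -> None:
    """Run COLMAP's standard incremental SfM pipeline via its CLI."""
    os.makedirs(f"{workspace}/sparse", exist_ok=True)
    db = f"{workspace}/database.db"
    # 1. Detect identifiable features (landmarks) in every image.
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", db, "--image_path", image_dir], check=True)
    # 2. Match landmarks across every pair of overlapping images.
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", db], check=True)
    # 3. Incrementally recover camera poses and triangulate a sparse point cloud.
    subprocess.run(["colmap", "mapper",
                    "--database_path", db, "--image_path", image_dir,
                    "--output_path", f"{workspace}/sparse"], check=True)
```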
For guidelines see the section : Structure from Motion
(4) NeRF (2020)
Method : Rendering
SfM mostly gives us, as the name implies, structure, in the form of point clouds. These are often imperfect, sparse representations of objects and space that lack the dense color and luminosity information needed for realistic viewing. Thanks to recent advances in machine learning, Neural Radiance Fields (NeRF) have emerged as an enhanced way to convert photogrammetric images into spatial data. Radiance fields incorporate physical aspects of light, including directionality, to create a more realistic representation of objects. Although highly effective, NeRFs require substantial compute to both train and render (view).
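To make the rendering step concrete, here is a minimal NumPy sketch of the volume-rendering quadrature NeRF uses along a single camera ray. The densities and view-dependent colors are assumed to come from querying the trained network at sample points along that ray.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Blend samples along one ray into a pixel color (NeRF quadrature).

    sigmas: (N,) densities returned by the network at each sample point
    colors: (N, 3) view-dependent RGB returned by the network
    deltas: (N,) distances between consecutive samples
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: how much light survives past all earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

Repeating this for every pixel of every frame is why viewing a NeRF is so compute-hungry: each sample requires a full network query.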
For guidelines see the section : NeRF
(5) Gaussian Splatting (2023)
This is the method abandoned.ai currently uses
Method : Rendering
NeRF provides a baseline for rendering spaces, yet at times it does so perhaps a bit too literally. Radiance fields can be represented in different ways: a typical NeRF viewer evaluates the field along the entire trajectory of every beam of light, which means the scene must be completely re-rendered each time the view changes. However, the same light can be compressed into gaussians, small blurred particles that roughly express it in a way that is much less computationally intense.
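The core of the idea is how cheaply those particles blend together. Below is a minimal sketch of the compositing step at a single pixel, assuming the gaussians have already been projected to 2D and sorted front to back (as a rasterizer would do).

```python
import numpy as np

def splat_pixel(pixel, means2d, inv_covs2d, colors, opacities):
    """Front-to-back alpha compositing of depth-sorted 2D gaussians."""
    out = np.zeros(3)
    transmittance = 1.0
    for mean, inv_cov, color, opacity in zip(means2d, inv_covs2d, colors, opacities):
        d = pixel - mean
        # Each gaussian is a small blurred particle: its contribution
        # falls off as exp(-0.5 * d^T Sigma^{-1} d) from its center.
        alpha = opacity * np.exp(-0.5 * d @ inv_cov @ d)
        out += transmittance * alpha * color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # pixel is effectively opaque; stop early
            break
    return out
```

Because no neural network is queried per sample, this blending is cheap enough to run in real time.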
For guidelines see the section : Gaussian Splatting
(6) SMERF (2023)
Method : Rendering
Announced only a day before this writing, SMERF is a novel way to stream radiance field representations dynamically, allowing multiple captured environments to be blended together and loaded between seamlessly. Previous rendering methods were sufficient only for individual rooms and isolated scans, but with this technology we are able to recreate entire floors of buildings and stream their contents to clients over the web.
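Purely as an illustration of the streaming idea (this is not SMERF's actual format or API), a client might fetch only the submodel covering the camera's current position from a hypothetical tile server:

```python
import urllib.request

# Hypothetical server layout: the scene is partitioned into tiles
# and each tile's submodel is served as a binary blob.
BASE_URL = "https://example.com/scenes/floor_2"

def fetch_submodel(cam_x: float, cam_y: float, tile_size: float = 5.0) -> bytes:
    """Download the submodel for the tile containing the camera."""
    tx, ty = int(cam_x // tile_size), int(cam_y // tile_size)
    with urllib.request.urlopen(f"{BASE_URL}/tile_{tx}_{ty}.bin") as resp:
        return resp.read()
```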
For guidelines see the section : SMERF