News Nvidia Showcases Incredible Instant NeRF 2D to 3D Photo AI Processing

It has little use for most people.
Something like this could be used for easily capturing navigable 3D environments with standard camera equipment. For example, a real-estate company could walk through a home while recording video of each room, then have that footage automatically converted into a 3D model of the house. A potential buyer could then browse a number of houses from home in a web browser, or even in VR, and get a better feel for each of them than a collection of photos can provide. The same process could be used to create a model on which to base renovations, with potential changes to a room tested out in real time. And people might just want to take 3D photos capturing a scene in its entirety, like a panorama, but with the ability to move around within that space. Or perhaps a game or virtual chat program could let a person appear in the virtual environment as they appear in person.

Though as far as their demonstration video went, it wouldn't exactly be practical to have a subject pose perfectly still while you take numerous photos of them from every direction. And for that matter, I suspect the system would get confused if you tried that outside a controlled environment, with other people walking around, trees blowing in the wind, and so on. AI algorithms might be able to filter those things out, but that's probably not going to happen in near-real-time on standard computer hardware, at least not yet.
 
It has little use for most people.
Photogrammetry is used for many things: real estate, crime scene reconstruction, factory upgrades... heck, anywhere you have complex equipment far away from the design team. You send a guy with a camera to capture the scene, and then the design team can figure out what new equipment fits in what spaces.

But it's finicky to get the images captured properly, and reconstruction is slow. What that means for the workflow is that you capture images in the field and hope you got enough of the right areas in the right ways. Then you bring them back to the lab to reconstruct the scene, which takes minutes to hours... only to find out something wasn't right with the capture. Sometimes it's not possible to go back to the scene, or the scene has changed enough that you have to recapture the entire thing, and sometimes you don't recapture but instead spend days or weeks manually tweaking to get the render right.

If this tech can render instantly in the field on a commodity laptop, then that would be a game changer and very valuable to the right industries.
 
It won't take long for ladies to dress up in a dress they saw on a movie star, take several front pics of themselves, then use a rear pic of the movie star to make their butt look nice. 🤣
 
So essentially this is just photogrammetry boosted by AI. It does have uses in many 3D development fields. The downside of this method is that the topology is usually a garbled mess, so you may get a solid texture out of it, but the geometry will need to be reworked if efficiency is a concern. Then again, looking at how Unreal Engine 5 handles high-density geometry, this may become less of an issue.
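To make the retopology point concrete, here's a toy sketch (plain Python, nothing to do with Nvidia's actual pipeline, and the `decimate` function name is just mine) of vertex clustering, about the simplest decimation scheme there is: snap every vertex to a coarse grid, merge vertices that land in the same cell, and drop triangles that collapse to a line or point. Real tools, like Blender's Decimate modifier or quadric-error-metric simplifiers, preserve shape far better, but the basic idea of trading geometric detail for triangle count is the same.

```python
def decimate(vertices, triangles, cell=1.0):
    """Reduce a triangle mesh by clustering vertices onto a grid of size `cell`.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i, j, k) index tuples into `vertices`
    Returns a (new_vertices, new_triangles) pair.
    """
    clusters = {}       # grid cell -> index into new_vertices
    remap = []          # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in clusters:
            clusters[key] = len(new_vertices)
            # Place the merged vertex at the cell's snapped position.
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(clusters[key])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:   # degenerate after merging? drop it
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

With two triangles whose corners nearly coincide, one triangle collapses and disappears, which is exactly the kind of blunt simplification that makes photogrammetry-style meshes lighter but also why the result usually still needs a manual retopology pass.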