What I can't tell from the Immerge articles I've read (sorry, haven't watched the video yet, I'm at work) is whether they record "eye-space 3D" or just "focus 3D" (for lack of better terms!).
i.e. I get that you can derive spatial information from focus (focus near: far = fuzzy; focus far: near = fuzzy), but our brains' interpretation of "depth" comes much more from the distance between our eyes, right? Plus, at a given focal length and aperture (i.e. your eye in a given amount of light), the depth of field for faraway objects is rather large: past the hyperfocal distance, "everything" is in focus, so focus alone tells you little about distance out there.
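To put rough numbers on that: stereo depth falls out of disparity as Z = f*B/d, while the hyperfocal distance H = f^2/(N*c) + f marks where focus stops discriminating. A quick back-of-the-envelope in Python, with eye-ish values I'm guessing at (none of these numbers come from the articles):

    # Back-of-the-envelope with made-up eye-ish numbers:
    # stereo depth from disparity vs. hyperfocal distance for an eye-like lens.

    def stereo_depth_mm(f_mm, baseline_mm, disparity_mm):
        # Z = f * B / d: farther objects produce smaller disparity between the eyes.
        return f_mm * baseline_mm / disparity_mm

    def hyperfocal_mm(f_mm, f_number, coc_mm):
        # H = f^2 / (N * c) + f: focus at H and everything
        # from H/2 to infinity looks sharp.
        return f_mm ** 2 / (f_number * coc_mm) + f_mm

    # ~17 mm eye focal length, ~64 mm interpupillary distance, ~f/4 pupil
    # in bright light, ~0.005 mm circle of confusion -- all rough guesses.
    print(stereo_depth_mm(17, 64, 0.02))   # 54400.0 mm: a 0.02 mm disparity still reads as ~54 m
    print(hyperfocal_mm(17, 4.0, 0.005))   # ~14467 mm: past ~14.5 m, focus barely discriminates depth

So the two-eyed baseline keeps giving you usable depth cues well beyond the point where focus has gone flat.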
The spherical shape of the Immerge makes me assume it's just recording focus, but it could easily have two "eyes" at a distance within that sphere too... Going by what I've seen of their first two cameras, and how they could do a "slight 3D" shift when viewing, I assume it's focus-only. Does anyone know?
Anyway, it's only a matter of time before we have a mobile light-field 3D video camera... combine that with eye-tracking software on a VR headset to determine which object to focus on, and that would be amazing. Imagine a movie filmed like that: you're a person in the room, or a fly on the wall, in every single scene.
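To make the eye-tracking idea concrete, here's a toy sketch of what that playback loop might look like; every name in it (the scene table, gazed_depth, render_frame) is invented for illustration, not anything Lytro has shown:

    # Toy sketch of an eye-tracked playback loop -- all names here are
    # invented for illustration, not a real headset or Lytro API.

    scene_depths_m = {"actor": 1.5, "window": 4.0, "mountains": 200.0}

    def gazed_depth(gaze_target):
        # Stand-in for ray-casting the tracked gaze against scene geometry.
        return scene_depths_m.get(gaze_target, float("inf"))

    def render_frame(gaze_target):
        focus_m = gazed_depth(gaze_target)
        # A real renderer would re-integrate the light-field rays with the
        # focal plane at focus_m; here we just report the decision.
        print(f"refocusing light field at {focus_m} m for '{gaze_target}'")

    render_frame("actor")      # refocusing light field at 1.5 m for 'actor'
    render_frame("mountains")  # 200 m is past the hyperfocal point: effectively all-sharp

The point being: the headset only needs a depth per gaze target each frame, and the light field supplies whatever focal plane you ask for after the fact.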