Toshiba's New Camera Sensor Allows For Refocus After Shot

Guest
This... is exactly what I've been looking for. Now we can get the best focus for every picture we take. =D
 

razor512

Distinguished
Jun 16, 2007
2,134
71
19,890
Seems like a good idea, but we will need to know the resolution and the depth range. For example, the Lytro cameras allow refocusing, but the effect only works properly at macro ranges, and the lens still has to focus.

If you use a Lytro in real life, you will see that the lens has a focusing element and it does move. This is why, if you take a macro shot and try to refocus on a building in the background, it will not be in complete focus compared to if you had just focused on the building. There is a limit to how much it can refocus.

The problem with smartphones is that the focusing system does not provide enough latitude in the focusing element, which means that the post-process focus will be much more limited than even with the Lytro cameras.


Other than that, I feel that this will eventually become the next big evolution in camera technology.

Imagine having a quality DSLR or cinema camera with an f/1.4 lens, and then having both the shallow depth of field from f/1.4 and the ability to expand that depth of field in post.

Or imagine simply having a lens designed for a certain focal range, e.g. 1 foot to 50 feet, then using that for recording and removing the need for a follow focus, because the focus pulling can be done in post with perfect tracking of a subject as they move toward and away from the camera. That would also allow every scene to have the eyes tack sharp, which even experienced focus pullers have a lot of trouble with. If you look at movies such as The Dark Knight, even they could not get that perfect in every scene; it is not noticeable unless you go looking for it, but post-production focusing could get rid of even those lesser-known inaccuracies that most people never notice.
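
To make that follow-focus idea concrete, here is a toy Python sketch. It assumes each video frame is captured as a light field and that a tracker supplies the subject's distance per frame; depth_to_alpha is a hypothetical stand-in for the camera-specific calibration from distance to a refocus parameter. None of this comes from a real product.

[code]
# Toy sketch of focus pulling in post, assuming each video frame was
# captured as a light field and a tracker supplies the subject's
# distance in every frame. depth_to_alpha is a hypothetical stand-in
# for a camera-specific calibration from distance to refocus parameter.

def depth_to_alpha(depth_mm, captured_plane_mm=2000.0):
    # Hypothetical calibration: alpha = 1 at the distance the lens
    # physically focused on; nearer or farther subjects shift alpha.
    return captured_plane_mm / depth_mm

def pull_focus(frames, subject_depths_mm, refocus):
    """Refocus every frame on the tracked subject.

    frames            : list of per-frame light fields
    subject_depths_mm : tracked subject distance, one value per frame
    refocus           : any function (light_field, alpha) -> 2D image
    """
    return [refocus(lf, depth_to_alpha(d))
            for lf, d in zip(frames, subject_depths_mm)]
[/code]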
 

gamebrigada

Distinguished
Jan 20, 2010
126
0
18,680
[citation][nom]Razor512[/nom]...the Lytro cameras allow refocusing, but the effect only works properly at macro ranges, and the lens still has to focus. If you use a Lytro in real life, you will see that the lens has a focusing element and it does move. [...] There is a limit to how much it can refocus.[/citation]

You, sir, are a little confused about how the Lytro works, or how a light field camera works at all.

The Lytro has NO moving lens parts. The reason? It's entirely different from any regular camera we have today. It doesn't focus light; it doesn't need to. It simply filters the light that enters the lens so that light at radical angles does not interfere with the photo taken. The technology behind the lens is actually fairly simple; everything is in the software. Because there is a lens array on the sensor, the sensor directly records an image made of hundreds of small circular, extremely fisheyed versions of the shot. In this way you are not capturing just one angle of light from every pixel-sized point in the scene, you are capturing many. Later on, in software, adjustments can be made, especially to extend the light sensitivity and color saturation. That's what a light field camera is really good at. The refocusing comes in later: by using all of these sub-images of different light angles on the sensor, we can refocus on different parts of the picture.
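
For the curious, here's a rough Python/NumPy sketch of that software refocusing step, the shift-and-add approach from the light field literature. The (U, V, Y, X) array layout, the alpha parameter, and the whole-pixel shifts are illustrative assumptions on my part; real pipelines calibrate the lens array and resample with sub-pixel interpolation.

[code]
import numpy as np

def refocus(lf, alpha):
    """Synthetic refocus by shift-and-add.

    lf    : 4D light field, shape (U, V, Y, X), one grayscale
            sub-aperture view per (u, v) position on the main lens.
    alpha : refocus parameter (nonzero); 1.0 keeps the captured
            focal plane, other values move the virtual focal plane.
    """
    U, V, Y, X = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its distance from the
            # aperture centre, then average. Points at the chosen depth
            # line up across views and stay sharp; the rest blurs.
            dy = int(round((u - cu) * (1.0 - 1.0 / alpha)))
            dx = int(round((v - cv) * (1.0 - 1.0 / alpha)))
            out += np.roll(np.roll(lf[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)

# Hypothetical usage, with lf loaded elsewhere as a (9, 9, H, W) array:
# img = refocus(lf, alpha=1.2)  # pull focus toward the background
[/code]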
 

InvalidError

Titan
Moderator
[citation][nom]gamebrigada[/nom]The technology behind the lens is actually fairly simple, everything is in the software.[/citation]
This is the most important point to emphasize.

The sensor itself is the same old CMOS or CCD technology. Hardware-wise, the only thing they do is slap a fancy compound fixed lens in front of it. The resulting raw output would make little to no sense to humans, so software is required to put the compound image back together in a human-friendly format.

This likely has a lot in common with MIT's 2GP camera project and telescope arrays.
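
As a toy illustration of that reassembly step: assuming an idealized sensor where each microlens covers exactly an s-by-s block of pixels (no rotation, vignetting, or hexagonal packing, all of which real decoders must calibrate for), pulling the compound image apart into per-angle views is just a reshape.

[code]
import numpy as np

def lenslet_to_subaperture(raw, s):
    """Rearrange a raw lenslet image into sub-aperture views.

    raw : 2D array of shape (Ny*s, Nx*s); each s-by-s block is the
          tiny image formed behind one microlens.
    s   : pixels per microlens along each axis.

    Returns a 4D light field of shape (s, s, Ny, Nx). Fixing (u, v)
    picks the same pixel under every microlens, which is equivalent
    to viewing the scene through one small region of the main lens.
    """
    ny, nx = raw.shape[0] // s, raw.shape[1] // s
    blocks = raw.reshape(ny, s, nx, s)
    return blocks.transpose(1, 3, 0, 2)  # -> (u, v, y, x)

# Hypothetical usage, with raw_sensor_image captured elsewhere:
# lf = lenslet_to_subaperture(raw_sensor_image, s=10)
# center_view = lf[5, 5]  # the view through the middle of the lens
[/code]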
 

razor512

Distinguished
Jun 16, 2007
2,134
71
19,890
@gamebrigada I am not saying that it relies directly on a focusing element for everything; I am saying that it uses a focusing element to extend the depth effect.

The light field sensor cannot capture all depth from macro to infinity; instead, it has brackets of ranges that it can adjust in post, and when you take a picture, the lens does focus to find the best bracket for the scene. That is why, in a macro situation, while you can adjust the focus, you will not be able to get distant objects in complete focus like you would if you shot the same scene minus the close object.

If you look at the Lytro gallery, you will see what I am talking about:

https://pictures.lytro.com/lytroweb/stories/82377

Even their own specs mention the limitation. They measure their depth in their own made-up term of "light fields", and the lens simply selects the focus setting that spreads the available light fields as evenly as possible across the objects in the image. That is why certain images allow macro focusing and focusing on other objects within a certain range, but after a point some objects are clearly out of focus, and you cannot focus on them or notice any change between distant objects when you click on each one. Yet in other images with similar distances, if no extreme macro object is present, distant objects easily become tack sharp.

And as I said before, if you look at one in real life, you will see the focusing element in the lens move slightly as it shifts from being aimed at your face to something far behind you.

If the sensor could capture all depth info from 0 mm to infinity, then it would not even need a lens. The closest we have to that are laser holograms, where virtually all light fields are captured and etched into the glass. While they are monochromatic, the info present is enough that you can aim a camera at one, zoom in, adjust focus, and get bokeh if you shoot with the aperture wide open, even though the camera is physically aimed at a flat object. Those systems cost hundreds of thousands of dollars and take hours to capture an image, but the effect is truly like looking through a window, as all visible angles are captured.
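
To put rough numbers on that bracket idea, here's a back-of-the-envelope Python sketch using the thin-lens equation. The 10 mm focal length and the +/- 0.1 mm synthetically refocusable band are invented for illustration; the point is only that a macro focus setting puts a distant building's image plane far outside whatever band the sensor captured.

[code]
# Back-of-the-envelope sketch of the "bracket" argument above, using
# the thin-lens equation. All numbers are invented for illustration:
# a 10 mm lens and a +/- 0.1 mm refocusable band around the captured
# image plane (the band the sensor can synthetically sweep in post).
F = 10.0     # focal length, mm (assumed)
BAND = 0.1   # refocusable slack around the captured plane, mm (assumed)

def image_plane(d_mm):
    """Thin lens: an object at distance d focuses at v = f*d / (d - f)."""
    return F * d_mm / (d_mm - F)

def refocusable(subject_mm, focused_mm):
    """Can we synthetically refocus on the subject, given where the
    physical focusing element actually landed?"""
    return abs(image_plane(subject_mm) - image_plane(focused_mm)) <= BAND

building = 100_000.0                   # building 100 m away
print(refocusable(building, 200.0))    # False: lens focused at 20 cm (macro)
print(refocusable(building, 5_000.0))  # True:  lens focused at 5 m
[/code]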

 

bit_user

Polypheme
Ambassador
Ironically, this focus-after-shoot is one of the less interesting features of light-field cameras. These sensors are every bit as much of a revolutionary change to photography as the video camera was.

It will take time for manufacturing technology, processing power, and storage technology to scale to the point where the true potential of light-field imagery can be realized.
 

bit_user

Polypheme
Ambassador
[citation][nom]Razor512[/nom]they measure their depth in in their own made up term of light fields[/citation]
The term "light field" certainly wasn't invented by Lytro. The only reason their camera is so limited is due to the physical size of the sensor. A larger sensor (or more sensors) will enable a deeper depth of field. That's the beauty of light-field photography - it scales!!
 

bit_user

Polypheme
Ambassador
[citation][nom]InvalidError[/nom]The sensor itself is the same old CMOS or CCD technology. Hardware-wise, the only thing they do is slap a fancy compound fixed lens in front of it.[/citation]If it were so simple, why could no one do it before Lytro? It's certainly not for lack of interest!

I think you might be minimizing the technical challenges involved in fabricating the microlens array. To be sure, processing horsepower efficient enough to preprocess and compress this data on the fly, in a portable form factor, was also a gating factor.
 

bit_user

Polypheme
Ambassador
Okay, so I hinted at the potential of lightfield photography. Let me drop a few clues.

Imagine the entire back of your smartphone is an image sensor. Eventually, sensors of this size will have greater low-light sensitivity than military-grade night vision, greater telephoto range than zoom lenses costing tens of thousands of dollars, and 3D depth extraction/scanning capabilities rivaling those of lidar scanners. Not only will you be able to focus-after-shoot, but zoom and pan within the light field as well.

And because it can scale in a modular fashion (i.e. by adding more sensors to the array), the cost of large sensors will increase only linearly, unlike bulky conventional lenses, which quickly become big, heavy, and unaffordable as you try to scale them up, since they must be made ever more precisely.

That's where this is all headed. Now, marry that to your favorite robotics application or augmented reality setup, and "revolutionary" starts to seem like an understatement.
 
[citation][nom]bit_user[/nom]Okay, so I hinted at the potential of lightfield photography. Let me drop a few clues.
[...]
Not only will you be able to focus-after-shoot, but zoom and pan within the light field, as well.[/citation]
This is the big one. Being able to focus after you shoot is, in a way, the least interesting potential. The light field camera basically captures the same information as a hologram (just in a different way, and at much lower resolution with current sensor technology). By refocusing with the light from different sides of the "lens", you can reconstruct a 3D view of the original scene. The 3D-ness is limited to the width of the "lens" (i.e. panning is limited from one edge of the viewing aperture to the other), but the info recorded by the sensor can be processed to give you a 3D camera.
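
Here's a toy sketch of that panning idea, reusing the (U, V, Y, X) sub-aperture layout assumed earlier: each slice of the light field is the scene seen from a different point on the main lens, so stepping across slices pans the virtual viewpoint, and the two extreme slices make a crude stereo pair. The baseline is capped at the aperture width, exactly as described above.

[code]
import numpy as np

def parallax_views(lf):
    """Sweep the virtual viewpoint across the aperture, left to right.

    lf : 4D light field, shape (U, V, Y, X). Each (u, v) slice is the
    scene seen from one point on the main lens, so stepping through v
    pans the camera -- but only within the width of the aperture.
    """
    U, V = lf.shape[:2]
    u_mid = U // 2
    for v in range(V):
        yield lf[u_mid, v]

# Hypothetical usage: a crude stereo pair from the aperture's two edges.
# views = list(parallax_views(lf))
# left_eye, right_eye = views[0], views[-1]
[/code]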
 

annymmo

Distinguished
Apr 7, 2009
351
3
18,785
The nice thing about light field cameras is that, with light field screens, we could get true depth imaging.
Like 3D, but actually working. This would also make images more natural, so they do not ruin people's eyes the way current screens do. Because of these health benefits, we should pursue this technology.
 