How Oppo is Faking 50MP Photos with a 13MP Lens

Status
Not open for further replies.

bustapr

Distinguished
Jan 23, 2009
I hate to think how fast the battery would be sucked dry if the phone takes 4 pictures at a time and fuses them. It better have a damn nice battery.
 

Blazer1985

Honorable
May 21, 2012
13MP sensor, definitely not lens. This technology has been available since the Nokia 6600 and didn't see many applications, for lots of reasons IMHO.
 

teh_chem

Honorable
Jun 20, 2012
What in the world is a 13MP lens? Regardless, how does this produce pictures of better optical quality? What does taking the same frame 10x in a row with the same optics and pixels do for zooming/cropping? It doesn't make sense: it's the same frame with the same number of pixels. I don't understand what you can "stitch" at that point; it's not like you're making a panorama, it's just the same picture. Also, at 10MB for a "50MP" camera, the compression seems a bit high. I wonder what the dynamic range/noise is for such pictures.
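Doing the quick math on that file size (a back-of-the-envelope sketch, assuming 24-bit RGB output and the quoted ~10MB file):

```python
pixels = 50_000_000                  # the claimed "50MP" output
uncompressed_mb = pixels * 3 / 1e6   # 3 bytes/pixel for 24-bit RGB -> 150 MB raw
ratio = uncompressed_mb / 10         # against a ~10 MB JPEG
print(f"{uncompressed_mb:.0f} MB raw, ~{ratio:.0f}:1 compression")  # 150 MB raw, ~15:1
```

A roughly 15:1 ratio is actually within the usual range for JPEG, so the file size alone doesn't tell us much about quality.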
 
My only guess is that the aperture can move (pivot) to get some extra coverage? Not sure what they would need to do to ensure the sensor gets all the light with this; hell, maybe move the entire thing.

Sure hope so.
 

razor512

Distinguished
Jun 16, 2007
That is interpolation, as they are scaling the image up. Genuinely increasing the megapixel count requires the sensor to shift in multiple directions so the camera can better analyze the light coming through the lens, and thus capture more detail. Since a single pixel collects light from more than one point in the scene, when you take a photo you end up with multiple units of detail hitting the same pixel on the sensor (e.g., two grains of sand on a beach close enough together that their photons land on the same pixel, so they blur into one). By moving the sensor around and letting the pixels sweep across the stream of photons, the camera can do some processing to effectively increase the resolution and resolve both grains.

This is a function found on some expensive medium-format cameras in the $30,000+ price range. (For it to work, the camera has to be perfectly still; any movement in any direction greater than the length of a pixel on the sensor will ruin the process entirely.)
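To make the sensor-shift idea concrete, here is a toy numpy sketch (my own illustration, not any camera's actual pipeline) of how four captures, each offset by half a pixel, interleave into one double-resolution grid:

```python
import numpy as np

def combine_pixel_shift(frames):
    """Interleave four half-pixel-shifted captures into one 2x-resolution image.
    Toy model: assumes grayscale frames taken at offsets (0, 0), (0, 0.5),
    (0.5, 0), (0.5, 0.5) pixels, with perfectly accurate shifts."""
    h, w = frames[0].shape
    out = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
    out[0::2, 0::2] = frames[0]  # no shift
    out[0::2, 1::2] = frames[1]  # shifted half a pixel right
    out[1::2, 0::2] = frames[2]  # shifted half a pixel down
    out[1::2, 1::2] = frames[3]  # shifted half a pixel down and right
    return out
```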

With a fixed sensor, the most you can do is take multiple frames, stack them, and then take the mean of each pixel. This improves detail and color accuracy by effectively improving the signal-to-noise ratio.

You can do this manually in Photoshop: set your DSLR to lock the mirror up to prevent any movement, then take around 10 photos of the same object.

Then bring the images into Photoshop, stack them all as a smart object, and change the stack mode to mean.
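If you'd rather script it than click through Photoshop, the same mean stack is a few lines of numpy (a sketch; the file names are placeholders):

```python
import numpy as np
from PIL import Image

# Load a burst of identically framed shots (placeholder file names).
frames = [np.asarray(Image.open(f"shot_{i}.png"), dtype=np.float32)
          for i in range(10)]

# Average the stack pixel by pixel; random noise cancels toward the mean.
mean_image = np.mean(frames, axis=0)
Image.fromarray(np.clip(mean_image, 0, 255).astype(np.uint8)).save("stacked.png")
```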

Since the noise and other unwanted elements in an image are random, but within a certain number of standard deviations of what the pixel should be, the more images you stack, the closer you get to the true value of each pixel, and you pretty much end up with a noise-free image with better color and more detail. While this will let you enlarge an image further, it is not increasing the resolution of the image; it is just bringing out more of the detail within that resolution. For example, a 13-megapixel image from a smartphone will still have less detail than one from a 12-megapixel DSLR.

No amount of stacking in this method will get you more detail than what a perfect 13-megapixel sensor can give, but the more you stack, the closer you get to having detail that matches the number of pixels (at least until you hit the limits of the lens).

For many cheaper cameras, the imperfections in the lens are larger than the pixels on the sensor, so you may end up with a camera-and-lens combo where the sensor is 13 megapixels but the lens can only resolve 6-8 megapixels of actual detail. In that case, no amount of stacking will get around the issue (and you cannot move the camera, as that would change the perspective).
 

InvalidError

Titan
Moderator

With image processing algorithms improving as more processing power becomes available, perfect stillness is not really necessary anymore; the camera only needs to be still enough that frames can be correlated with each other so algorithms can apply motion compensation. A few posts earlier, I posted a link to a Slashdot article about people who managed to put together an algorithm that can reconstruct images through a diffuse lens and from scattered light reflections on walls. With that degree of sophistication, it becomes difficult to imagine how far image processing might go.
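As a rough illustration of that correlate-then-compensate step (a sketch using OpenCV's phase correlation, not any particular phone's pipeline), the translation between frames can be estimated and undone before stacking:

```python
import cv2
import numpy as np

def align_and_average(frames):
    """Mean-stack slightly shaky frames: estimate each frame's translation
    against the first via phase correlation, shift it back, then average.
    Sketch only; real pipelines also handle rotation and perspective."""
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    for f in frames[1:]:
        f = f.astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)    # estimated (x, y) shift vs. reference
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])  # translate the frame back into place
        acc += cv2.warpAffine(f, m, (f.shape[1], f.shape[0]))
    return acc / len(frames)
```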
 

TwoSpoons100

Distinguished
Mar 19, 2014
Lens quality is at least as important as pixel count, if not more so. 5MP with a good lens will deliver a better photo than 10MP with a crap one.
 

InvalidError

Titan
Moderator

Most optical aberrations can be fixed with post-processing. Image processing has progressed to the point where it is becoming possible to reconstruct images from diffuse reflections and projections using image sensors... https://medium.com/the-physics-arxiv-blog/7d85673ffb41

Cameras may not need a conventional optical lens if that sort of image processing gets perfected.
 

InvalidError

Titan
Moderator

Who knows.

The article I linked suggests that any translucent surface, such as frosted glass, could eventually be used as a lens. When you reach the degree of image processing where you can bring a seemingly shapeless (diffused) blur back into focus, correcting lens defects should be child's play. The same article says the researchers managed to reconstruct an image of things hidden behind a chicken breast; if their algorithm can use odd materials like flesh as a lens, imagine how much farther the technique might go with more processing power, raw image access (instead of JPEGs), more tweaking, multiple exposures, etc.
 

timon_tablet

Honorable
May 15, 2013
Some idiots keep imagining that ordinary consumers need ever more pixels. In fact, 10MP is already enough for most consumers; the real problem for manufacturers is how to improve the imaging quality of their phone cameras, not a frantic race to a 50MP photo. Higher pixel counts are not equivalent to higher image quality. Serious photography enthusiasts will always ignore a 50MP photo from a phone camera.
 

It's all about sensor quality but, truth be told, this is exactly what consumers want. And when these 50-megapixel images are scaled down on Facebook etc., people will be happy, because downscaling removes lots of noise (and noise is a killer on cell phone cameras for sure).

I mean, they could have made TVs with full-array local dimming (not this edge-lit idea) at the cost of some thinness (still no thicker than most CCFL-lit LCDs), but consumers want 2 things.

1. Thin screens (not too much concern over black levels or real contrast ratio)
2. Lower prices (full-array local dimming costs money)

The real shame here is that even theaters seem to be running limited range (16-235 instead of 0-255) nowadays (no more truly black blacks, and dimmer brights as well).
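For anyone unfamiliar with the difference: stretching limited-range 8-bit video (16-235) back to full range (0-255) is just a linear mapping (a sketch; real converters also dither):

```python
def limited_to_full(y):
    """Map a limited-range (16-235) 8-bit luma value onto full range (0-255)."""
    return max(0, min(255, round((y - 16) * 255 / 219)))  # 219 = 235 - 16

print(limited_to_full(16))   # 0   -> limited-range black becomes true black
print(limited_to_full(235))  # 255 -> limited-range white becomes peak white
```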
 

InvalidError

Titan
Moderator

The ultimate form of "local dimming" is local lighting in the form of emissive display technologies like OLED.

I only wish research on bringing down the cost of OLED would move faster. In theory, it should be possible to simply print most OLED components instead of going through the vacuum metal deposition and etching processes used for LCDs, which would make OLED panels only a bit more expensive than printing on plastic.
 
