IBM Files Patent to Resize Images Without Artifacts

Status
Not open for further replies.

lahawzel

Distinguished
Dec 16, 2011
105
0
18,680
"Much more elaborate methods, such as fractal analysis, require enormous computing resources that are generally not practical in consumer applications."

Isn't this exactly what Perfect Resize does? I have it as a Photoshop plugin and I am able to upscale images to many times their original size and maintain sharp edges with no artifacting. It's pretty fast, too.

That being said, props to IBM for continuing to actually innovate and push for technological advancement in this day and age. Unlike a certain company that wastes resources patenting ultra-wide touchpads.
 

shin0bi272

Distinguished
Nov 20, 2007
1,103
0
19,310
I wonder if they got this idea from CSI. They seem to be able to pick up a reflection off of someone's eyeball from a camera phone 3 blocks away, click a couple of buttons, and get your license plate #. So life imitating art, I guess.
 

freggo

Distinguished
Nov 22, 2008
2,019
0
19,780
Could there be a Photoshop plugin in the future?
I'd have a use for this for a number of daily tasks if it
indeed works better than current resizing methods.
 

Parsian

Distinguished
Apr 28, 2007
774
0
18,980
There is so much one can learn from noise; that's why noise is so interesting. You can pull a lot of hidden information out of noise.
 

lahawzel

Distinguished
Dec 16, 2011
105
0
18,680
[citation][nom]shin0bi272[/nom]I wonder if they got this idea from CSI. They seem to be able to pick up a reflection off of someone's eyeball from a camera phone 3 blocks away and just click a couple of buttons and get your license plate #. So Life imitating art I guess.[/citation]

Image upscaling algorithms, no matter how good they get, cannot produce information that is not in the original material. If the entire license plate is only 30-ish pixels in area in the photograph you are processing, it's going to upscale to unintelligible mixes of color, no way around it.

Upscaling will give you a bigger version of the original image, but details not visible in the original won't show up in the resulting picture. The SCP-191 testing log from the SCP Foundation, despite being from a humor site, explains succinctly why things like Zoom & Enhance, Uncrop, and Rotate Camera don't work.
 

freggo

Distinguished
Nov 22, 2008
2,019
0
19,780
[citation][nom]shin0bi272[/nom]I wonder if they got this idea from CSI. They seem to be able to pick up a reflection off of someone's eyeball from a camera phone 3 blocks away and just click a couple of buttons and get your license plate #. So Life imitating art I guess.[/citation]


As much as I like the CSI/NCIS-type shows (having worked in a lab many years ago myself), their frequent use of image 'enhancement' on crappy security footage is getting a bit old.
It's like the airplane scenes where an engine sputters and the plane immediately goes into a screaming nose dive; which, as a pilot myself, I can assure you is NOT what happens :).


 

Parsian

Distinguished
Apr 28, 2007
774
0
18,980
[citation][nom]freggo[/nom]As much as I like the CSI/NCIS type shows (having worked in a LAB many years ago myself) but their frequent use of the image 'enhancement' from crappy security footage is getting a bit old.It's like the airplane scenes where an engine sputters and the plane immediately goes into a screaming nose dive; which, as a pilot myself, I can assure you is NOT what happens :).[/citation]

lol, they extremely overestimate gravity; that's why the plane goes down like a delta function
 

lamorpa

Distinguished
Apr 30, 2008
1,195
0
19,280
[citation][nom]LaHawzel[/nom]Image upscaling algorithms, no matter how good they get, cannot produce information that is not in the original material...[/citation]
Humor upscaling algorithms, no matter how good they get, cannot produce comprehension that is not in the original viewer...
 

serendipiti

Distinguished
Aug 9, 2010
152
0
18,680
I wonder if they are using some kind of fractal compression, then decompressing to the desired image size. Since fractal compression is somewhat lossy (the good part being that you know the amount of loss), it seems ironic that an approximation is the more accurate way (like series in maths?).
 

jamie_1318

Distinguished
Jun 25, 2010
188
0
18,710
[citation][nom]lamorpa[/nom]Humor upscaling algorithms, no matter how good they get, cannot produce comprehension that is not in the original viewer...[/citation]

I think this comment applies to you more than the person you are quoting. He is absolutely correct that you can't get information out that doesn't go in.

Basically what that means is that there are hard real limits on how much any image can be enhanced, and information that did not make it into the image sensor cannot be extrapolated accurately.

Is better image enlargement a good thing? Of course it is. But most people vastly overestimate the amount of quality you can extrapolate from a picture. Once you work with the kind of pictures most people would want to use this on, you will agree with me. Even humans have a really difficult time re-sampling some of this stuff, and the obvious problem of artifacts getting enlarged will still exist.

EDIT: went back and read my quotation. Totally misread that.
 

lamorpa

Distinguished
Apr 30, 2008
1,195
0
19,280
[citation][nom]jamie_1318[/nom]I think this comment applies to you more than the person you are quoting. He is absolutely correct that you can't get information out that doesn't go in...[/citation]
Um, LaHawzel was replying to shin0bi272's comment. LaHawzel was making a serious 'correction' to shin0bi272's joke comment, so I parodied it. Having to explain this means the missing-the-humor applies double to you. :-D
 
G

Guest

Guest
Bad article title.....

You mean "IBM Files Patent to Resize Images With Fewer Artifacts"

Producing a perfect scaled up image is impossible. You can predict information that is not there, but you can not ever be certain you've done it perfectly.

I'll give you an example. Say you have 10 pixels of data in a real scene, and an image created from that scene (say, a photograph) contains 1 pixel for every 10. In the real scene you have 9 pixels of pure blue with a 1-pixel-wide red line going through the center. The photograph will have 1 pixel of slightly purple blue.

No matter what you do with that 1 pixel of slightly purple blue, you will NEVER get the red line back, because you can never be certain whether the original had 1 pixel of red and 9 pixels of blue, or 10 pixels of slightly purple blue, or 5 pixels of slightly less purple blue and 5 pixels of slightly more purple blue, etc. All of those combine into the same single-color pixel in your source.

In this example there can be hundreds of millions of possible combinations of original data; all you can do is pick one, assume it's one pixel of slightly purple blue, and scale it up...
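That ambiguity fits in a few lines of code. Here's a minimal sketch (the exact RGB values, including the "slightly purple blue," are made up for illustration) where two different 10-pixel rows average down to the very same pixel:

```python
# Average a row of (R, G, B) pixels down to a single pixel, as a simple
# box downsampler would. Integer division keeps channels in 0..255.
def downsample(pixels):
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

BLUE = (0, 0, 255)
RED = (255, 0, 0)

# Original A: nine blue pixels with a one-pixel-wide red line.
row_a = [BLUE] * 5 + [RED] + [BLUE] * 4
# Original B: ten identical "slightly purple blue" pixels.
row_b = [(25, 0, 229)] * 10

print(downsample(row_a))  # (25, 0, 229)
print(downsample(row_b))  # (25, 0, 229) -- identical; the red line is gone
```

Both rows collapse to the same pixel, so an upscaler that sees only (25, 0, 229) has no basis for preferring one original over the other.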
 

koga73

Distinguished
Jan 23, 2008
405
0
18,780
Not sure how this would work. If you scaled the image by different amounts in x and y, your resulting image wouldn't have the same aspect ratio.
 

lamorpa

Distinguished
Apr 30, 2008
1,195
0
19,280
[citation][nom]koga73[/nom]not sure how this would work if you scaled the image different amounts in the x and y then your resulting image wouldn't be the same aspect ratio.[/citation]
Oh yeah. I guess they missed that after years of research. It's not like you misunderstood what was being stated or anything. Back to the drawing board for IBM.
 

rumandcoke

Honorable
Feb 28, 2012
34
0
10,530
Patent invalid due to their choice of language.

"Without artifacts": there is no such thing. What is being described is better upscaling.

I suppose that since patents don't actually require products to be produced, they can use this patent to troll other companies who advertise artifact-free upscaling.
 

freggo

Distinguished
Nov 22, 2008
2,019
0
19,780
[citation][nom]jamie_1318[/nom]...He is absolutely correct that you can't get information out that doesn't go in.Basically what that means is that there are hard real limits on how much any image can be enhanced, and information that did not make it into the image sensor cannot be extrapolated accurately...[/citation]

There is one bit of fine print to this that should be included.
There is a lot of 'information' in an image that is not visually available to the human eye.
For example, many photos of night skies look like a black picture with a few blinking stars, but run them through Photoshop with an HDR filter and more things become visible.

What I am trying to say is that you cannot 'recover' information that is simply not there, but you can make visible information that, due to its nature, is not visible to the human eye in its raw format.

Still, Abby Sciuto performs a few miracles in her lab; and all in a time frame that has little to do with reality :)
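That distinction is easy to demonstrate: a plain linear contrast stretch (the pixel values below are hypothetical 8-bit grayscale samples) makes near-black detail visible without adding any information that wasn't already in the data:

```python
# Linearly remap pixel values so the darkest becomes 0 and the brightest 255.
# No new information is created; existing differences are just made visible.
def stretch(pixels):
    lo, hi = min(pixels), max(pixels)
    if lo == hi:
        return [0] * len(pixels)  # flat image: nothing to reveal
    return [(p - lo) * 255 // (hi - lo) for p in pixels]

# A "night sky": all values crammed into 0..6, i.e. nearly black on screen.
night_sky = [0, 1, 1, 2, 6, 1, 0, 3, 1, 0]

print(stretch(night_sky))  # [0, 42, 42, 85, 255, 42, 0, 127, 42, 0]
```

The faint star at value 6 was always in the data; the stretch only moves it into a range the eye can distinguish.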

 

bit_user

Polypheme
Ambassador
> nearest neighbor, bilinear or bicubic resizing

LOL.

Anyone who's taken a signal-processing class or read a little about it knows that sinc is the theoretically ideal way to resample a band-limited signal.

In the real world, a Lanczos-windowed sinc tends to work best on digital images.

Most image scaling filters are separable, meaning they can be implemented as two 1D filters. This dramatically reduces the amount of computation needed, especially for large kernels.

None of this is even remotely new. More details about IBM's patent would be needed to see what, if anything, is novel.
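For reference, the Lanczos-windowed sinc is simple to write down: L(x) = sinc(x)·sinc(x/a) for |x| < a, else 0, with a = 3 a common choice. Below is a minimal 1-D sketch (function names are mine, not from any particular library); a separable 2-D resize would just run it over rows, then over columns:

```python
import math

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(signal, new_len, a=3):
    """Resample a list of samples to new_len using Lanczos interpolation."""
    scale = len(signal) / new_len
    out = []
    for i in range(new_len):
        # Center of output sample i, expressed in input coordinates.
        x = (i + 0.5) * scale - 0.5
        lo = math.floor(x) - a + 1
        acc = weight_sum = 0.0
        for j in range(lo, lo + 2 * a):
            if 0 <= j < len(signal):
                w = lanczos(x - j, a)
                acc += w * signal[j]
                weight_sum += w
        # Normalizing by the weight sum keeps flat regions flat at the edges.
        out.append(acc / weight_sum if weight_sum else 0.0)
    return out

# Upscaling a constant signal reproduces it exactly (weights are normalized).
print(resample_1d([1.0, 1.0, 1.0, 1.0], 8))
# prints [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

This sketch handles upscaling only; production resamplers also widen the kernel in proportion to the shrink factor when downscaling, to avoid aliasing.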
 

rodamar

Honorable
May 4, 2012
1
0
10,510
All of the above comments seem to ignore the way we actually see. It is said (googling is recommended here) that, for example, a misspelled word can nevertheless be read effortlessly, the misspelling blissfully overlooked, as long as the first and last letters of its spelling are correct. The context gives us the info we need to understand correctly anyway.

Moreover, CAPTCHAs demonstrate that even badly deformed letters can nevertheless be correctly interpreted and successfully read. Connecting dotted lines and other such phenomena included, it ought of course to be possible, given the needed algorithms, to plausibly extract much more info out of a photo than is merely physically present as pixels.

Might not such heuristics be brought into play?
 
G

Guest

Guest
Resizing/upscaling image processing could learn a few things from the advances made in voice- and text-recognition areas.

One needs to combine multiple processing methods together with occasional human assistance along the way. It's a dynamic, iterative process that still needs a human to occasionally answer the question, "Um, we aren't sure what to do here... is the next letter/word/sound supposed to be [a, b, c, d...]?"

One can find some examples of this in very limited use in the application of filters in image-processing software, where during processing the software presents a matrix of possible results and asks the user to choose which one looks the most accurate.

What we currently need is software that is good at identifying the key image details it needs to ask humans for help with. Ultimately, patterns, rules, and dictionaries need to be built up, integrated, and successfully brought into play, just as has been done with voice and text.

A typical scenario will ask an acceptable number of questions of the user during the processing phase, en route to a result the user will find acceptable. The software will ask for help choosing accurate identification of details concerning pixels, shapes, objects, or context, in relation to information pre-identified with confidence from data embedded in the image itself, data determined with confidence by the processing software, or prior user input earlier in the processing.

It would be great if image processing could magically figure out what we were looking at and produce sharper and clearer images on its own. However, we are not yet at the point where developers have identified sufficient patterns, rules, and logic for that to happen.

I'm not sure it can ever occur in one monolithic stand-alone package with no input from anyone, or even from another system in real time. For example, the processing software could, with the help of a person if necessary, concur that part of the image being processed includes a license plate. However, when determining whether one of the letters is an "E" or an "F", the processing software may have to leverage external online area-specific databases.

For example, I assume the photo-recognition software used by well-equipped law-enforcement agencies can make out the letters on a license plate from a video still. It would be a lot easier to do this if they dynamically submitted some pre-determined attributes of the image: where and when it might have been taken, the type of car, the state the license plate was issued in (via color or VIN, etc.). Then logic pre-compiled, or accessed in real time from individual US states, would be consulted to determine the most likely letter-number combination for the plate in the photo. This would be very similar to how credit-card transactions are verified online.
 

guardianangel42

Distinguished
Jan 18, 2010
554
0
18,990
[citation][nom]Anonymous[/nom]Bad article title.....You mean "IBM Files Patent to Resize Images With Less Artifacts"Producing a perfect scaled up image is impossible. You can predict information that is not there, but you can not ever be certain you've done it perfectly. Ill give you an example. Say you have 10 pixels of data in a real sample and an image created from that sample, say a photograph contains 1 pixel for every 10. In the real image you have 9 pixels of perfect blue with a 1 pixel wide red line going through the center of it. The image will have 1 pixel of slightly purple blue. No matter what you do with 1 pixel of slightly blue you will NEVER get that red line back. Because you can never be certain if the original image had 1 pixel of red and 9 pixels of blue, or maybe it had 10 pixels of slightly purple blue, or maybe it had 5 pixels of slightly less purple blue and 5 pixels of slightly more purple blue, etc. All of those will combine into the same single color pixel in your source. In this example there can be hundreds of millions of possible combinations of original data, all you can do is pick one, and assume its one pixel of slightly purple blue and scale it up....[/citation]

I'm no expert, but theoretically an algorithm could calculate the percentage of color change, compare it to the surrounding colors, and, based on the gradient within the image, determine that there had in fact been a red line there.

It may not be perfect, but given the fact that light wavelengths are numerical, and thus can be calculated mathematically, it seems that given the right technique, it should be possible to get a fairly high degree of reliability.

Of course, that's just with your extremely simplified example. I'm sure it's pretty damn complex when attempting to apply it to a more sophisticated image.
 