[SOLVED] Open discussion (Photo editing): I have a dream

Status
Not open for further replies.

ch33r

Distinguished
BANNED
Jun 13, 2010
316
4
18,685
I apologize in advance if this is in the wrong category. This is an open discussion about two dreams I have in relation to software.

#1) I dream of the day we can take a photo of something and, regardless of the resolution of the camera that took it, zoom in, and keep zooming in, and keep zooming in with no blur or pixelation. That is: I want to be able to take a picture of a pop bottle with your standard everyday $200 camera from 200 feet away, and zoom in close enough to read the ingredients on the bottle. In fact, no... I want to be able to keep zooming in on that picture to the point where I can see the atoms that make up the bottle. I want someone to write a program that lets you zoom in as far as you want, regardless of the original camera's resolution, with no pixelation and no blur. *looks at Topaz Gigapixel AI* Well, at least someone's trying.

#2) I want to be able to remove individual objects from a photo taken with your average everyday camera and see what was behind the object, instead of just a white square. For example, remember that pop bottle in the first one? Now that I'm done looking at the atoms, I want to remove the pop bottle (ONLY the pop bottle) and see the bottle cap that was sitting on the surface behind it. Another example: I want to go outside, take a picture of my car sitting on the driveway, put the image onto my PC, then remove/edit out the car (ONLY the car) and see the whole driveway, including the chalk drawings that were done on it the night before and the hockey puck I left there that ended up under the car. I want someone to make a program that makes this possible.

When someone makes these two programs, my life will be complete, and I will be ready for the afterlife..... if there is one

Thoughts, comments, and discussions below. All input welcome. Keep it clean and appropriate please :)
 
Last edited:
Well, #1 isn't a software issue, that's a hardware problem. It's not like some CSI movie where they can "enhance" as you zoom in. Sure, you can try, and it can extrapolate what information it thinks is there, but it's never going to be correct.
Additionally, higher-MP cameras really aren't necessary outside of very specific uses (like big billboard/banner prints). That's why my Canon Rebel T6 that costs 400 bucks is near the same MP count as a Sony A7III. In fact, once you hit a certain MP count you would simply not be able to take decent pictures; the pixels would be too small to take in enough light. When you compare a Sony A7III to the A7RIV (just MP count here), the A7III takes better low-light images.
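To put rough numbers on the pixel-size point above, here is a back-of-the-envelope sketch (assuming a full-frame 36 x 24 mm sensor and the nominal 24 MP and 61 MP counts; real sensors differ slightly in active area):

```python
import math

# Rough pixel pitch: divide the sensor area evenly among the pixels.
# Assumes a full-frame 36 mm x 24 mm sensor and nominal megapixel counts.
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0

def pixel_pitch_um(megapixels: float) -> float:
    """Approximate edge length of one pixel, in micrometres."""
    area_mm2 = SENSOR_W_MM * SENSOR_H_MM
    pitch_mm = math.sqrt(area_mm2 / (megapixels * 1e6))
    return pitch_mm * 1000.0

for name, mp in [("~24 MP (A7III class)", 24), ("~61 MP (A7RIV class)", 61)]:
    print(f"{name}: about {pixel_pitch_um(mp):.1f} um per pixel")

# Prints roughly 6.0 um for 24 MP and 3.8 um for 61 MP, so each 61 MP
# pixel has well under half the light-gathering area of a 24 MP pixel.
```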

#2 You can do that now: take one picture with the object and one without, and overlay them.
Jokes aside, that's not really something you could ever possibly make.
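And for what the two-picture trick looks like in practice, here is a minimal sketch with OpenCV and NumPy (the file names and the hand-made mask are hypothetical, and it assumes both shots came off a tripod so the frames line up):

```python
import cv2
import numpy as np

# Two aligned shots from a tripod: one with the car, one without it.
with_car = cv2.imread("driveway_with_car.jpg")        # hypothetical file
clean_plate = cv2.imread("driveway_without_car.jpg")  # hypothetical file

# Mask that is white wherever the car is (painted by hand or with a
# selection tool) -- hypothetical file as well.
mask = cv2.imread("car_mask.png", cv2.IMREAD_GRAYSCALE) > 0

# Where the mask is set, take pixels from the clean plate instead.
result = np.where(mask[..., None], clean_plate, with_car)
cv2.imwrite("driveway_car_removed.jpg", result)
```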
 
Solution

ch33r

Distinguished
BANNED
Jun 13, 2010
316
4
18,685
Well, #1 isn't a software issue, that's a hardware problem. It's not like some CSI movie where they can "enhance" as you zoom in. Sure, you can try, and it can extrapolate what information it thinks is there, but it's never going to be correct.
Additionally, higher-MP cameras really aren't necessary outside of very specific uses (like big billboard/banner prints). That's why my Canon Rebel T6 that costs 400 bucks is near the same MP count as a Sony A7III. In fact, once you hit a certain MP count you would simply not be able to take decent pictures; the pixels would be too small to take in enough light. When you compare a Sony A7III to the A7RIV (just MP count here), the A7III takes better low-light images.

#2 You can do that now: take one picture with the object and one without, and overlay them.
Jokes aside, that's not really something you could ever possibly make.

Thanks for the reply. To your response to #1... I didn't know that about megapixels. What's the cap, anyway?

About #2: I know you can do that lol... I just want to take ONE picture, upload it, remove the pop bottle and see the bottle cap, or remove the car and see the puck/chalk drawings.
 

ch33r

Distinguished
BANNED
Jun 13, 2010
316
4
18,685
#1 is mostly theoretical; I don't know what that cap would eventually be.
We have 61MP consumer cameras on the market already though, and they do just fine.

But I want to take a photo, regardless of resolution, and zoom in far enough to see the atoms that make up the bottle. I know about AI upscaling, but I want to just keep zooming in and zooming in with no blur.
 

USAFRet

Titan
Moderator
I have a Fuji X-T1, and a variety of lenses.
I've taken reasonably clear pics of a crater on the moon, and a fly's eyeball.

For your #2, regarding what is 'behind' something:
Photoshop and PaintShop Pro have 'Content-Aware Fill'. It can remove something, make a guess as to what would have been there, and fill it in.
It absolutely cannot actually SEE what is there.
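For anyone curious what that guess looks like in code, here is a minimal inpainting sketch with OpenCV (file names are hypothetical). The hole is filled in from the surrounding pixels, so nothing that was hidden behind the object is ever recovered:

```python
import cv2

# The photo, plus a mask that is white over the object to remove
# (both file names are hypothetical).
photo = cv2.imread("photo.jpg")
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

# Inpainting synthesizes the masked region from nearby pixels --
# a plausible guess, not the actual scene behind the object.
filled = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("photo_object_removed.jpg", filled)
```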
 

ch33r

Distinguished
BANNED
Jun 13, 2010
316
4
18,685
Physics, literally, prevents this.

Please... show us a "picture" of an atom. From any "camera".

Well, that's the problem: no one has written the program. I want to be able to take a photo with my phone, zoom in on the pop bottle 200 feet away, and count the atoms. Sure, it seems pretty far-fetched, but maybe that's why I call it a dream... Loooooool. Look at Topaz Gigapixel AI... I mean, at least someone's trying.
 

ch33r

Distinguished
BANNED
Jun 13, 2010
316
4
18,685
I have a Fuji X-T1, and a variety of lenses.
I've taken reasonably clear pics of a crater on the moon, and a fly's eyeball.

For your #2, regarding what is 'behind' something:
Photoshop and PaintShop Pro have 'Content-Aware Fill'. It can remove something, make a guess as to what would have been there, and fill it in.
It absolutely cannot actually SEE what is there.

Right. I want to zoom in far enough on that picture of the moon's crater that I can see the atoms that make up the moon.

It can "make a guess" as to what's behind there. I want someone to write a program that doesn't make a guess, but actually displays what's behind it.
 

USAFRet

Titan
Moderator
Well, that's the problem: no one has written the program. I want to be able to take a photo with my phone, zoom in on the pop bottle 200 feet away, and count the atoms. Sure, it seems pretty far-fetched, but maybe that's why I call it a dream... Loooooool. Look at Topaz Gigapixel AI... I mean, at least someone's trying.
Seeing a literal atom is not software, but physics.

Yes, I've investigated the various gigapixel solutions.
Trying to talk myself into purchasing a Gigapan device.
 

ch33r

Distinguished
BANNED
Jun 13, 2010
316
4
18,685
Seeing a literal atom is not software, but physics.

Yes, I've investigated the various gigapixel solutions.
Trying to talk myself into purchasing a Gigapan device.

But even with that you can only zoom in so far before you get pixelation. I want to keep zooming in. I want to zoom in on the rocks on the roof of one of those buildings and see all the atoms. Then I want to remove one of the rocks and see the rock behind it.
 
#1: In some very distant future, when terapixel cameras are in your pocket, you might be able to read what's written on your bottle from 200 feet. Again - it's physics, not software (rough numbers in the sketch below).
#2: Light (the thing photography captures) has the inconvenient property of traveling in straight lines (at least for objects reasonably close to you). There's no information in the photo you took about the chalk lines that were obstructed by the car when you took it. How would software know whether there are chalk lines or a dead squirrel behind the car?
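A rough sketch of those numbers for #1, assuming green light at about 550 nm and a roughly 5 mm phone-camera aperture (both round figures):

```python
WAVELENGTH_M = 550e-9   # green light, ~550 nm (assumed)
APERTURE_M = 0.005      # ~5 mm phone-camera aperture (assumed)
DISTANCE_M = 61.0       # ~200 feet

# Rayleigh criterion: smallest angle a circular aperture can resolve.
theta_rad = 1.22 * WAVELENGTH_M / APERTURE_M
smallest_detail_m = theta_rad * DISTANCE_M

print(f"Smallest resolvable detail at ~200 ft: {smallest_detail_m * 1000:.0f} mm")
# Roughly 8 mm: fine print on a label is already beyond a perfect 5 mm lens
# at that distance, and an atom (~1e-10 m) is about eight orders of
# magnitude smaller still -- no pixel count fixes that.
```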
 

Ralston18

Titan
Moderator
Wikipedia:


British science fiction writer Arthur C. Clarke formulated three adages that are known as Clarke's three laws, of which the third law is the best known and most widely cited. They were part of his ideas in his extensive writings about the future.[1] These so-called laws include:
  1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.
#3 is my favorite.....

PS:

:)
 

USAFRet

Titan
Moderator
Regarding your "bottle label at 200 feet "
This is about 120 feet.

[attached image: OEJOmkE.png]


Fuji X-T1
https://www.dpreview.com/reviews/fujifilm-x-t1

50-230mm zoom
https://www.dpreview.com/products/fujifilm/lenses/fujifilm_xc_50-230

35mm f1.4
https://www.dpreview.com/products/fujifilm/lenses/fujifilm_xf_35mm


Adobe Lightroom 5.7 to process the RAW images with a tiny bit of touchup.
PaintShop Pro 2019 to compile the jpg output from Lightroom and make a single .png image to display here.


Mostly default camera settings and autofocus.
If I desired to spend more time, I probably could have gotten it a little bit better. I was not sufficiently motivated to do so.
 

britechguy

Commendable
Jul 2, 2019
1,479
243
1,340
We are not sufficiently "magicked up" to see an atom with anything we conventionally think of as a camera.

With regard to "removing thing X and seeing thing Y that was behind it," well, we can't do that with our eyes nor can a camera do so with a lens. It can only detect what has light reflecting back (or emanating from) that can reach the aperture that does the capturing. I'm doubting the sci-fi X-ray vision that allows the stripping away of extraneous layers in front of it is in the cards for the future, either, if we're talking about something that allows us (or a camera) to see multiple planes behind opaque material. As has been mentioned, it defies the laws of physics.
 
I bet aliens some 237.61e+25 light years away have a program that does both those things

No program will be able to tell what is behind something unless you remove what is in the way, take a picture, then put the other item back. What you want is called "reality", not "photography". You are talking about some VR setup where you can manipulate objects; to do that you need to model every object you want to manipulate.

There is actually some thought about this: what kind of setup would you need to emulate the universe, or even a small part of it? Meaning, you go into a program, it loads a city for you, and you can touch and move everything in that city. That means every dog, every building, every window, every rock, every bug, every gust of wind, every coffee spill would need to be modeled so it can be manipulated. That would take so much data and computing power that you may as well just think of it as magic. I think the theoretical conclusion is that to model the universe in a way that lets you actually travel through it virtually and move things out of your way the way you want, you need something the size and complexity of the universe, or better.

As soon as you can create a computer system the size of the universe, you will have what you want.

To model a single drop of water, you have this:
Let's use the volume of a water drop that is used by the medical and scientific community. The accepted average volume of a drop of water is 0.05 mL (20 drops per milliliter). It turns out there are over 1.5 sextillion molecules in a drop of water, and more than 5 sextillion atoms per droplet.

So to model one drop of water in the way you want, you would need to model those 1.5 sextillion molecules (let's forget about atoms for now to keep things simple) and their interactions with each other in a program. Bring that up with a computer scientist when you run into one and ask them how easy that is to do. That is ONE DROP OF WATER. You want to model the moon, which is quite a bit larger.
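Those sextillion figures check out with a quick Avogadro's-number calculation (a sketch assuming pure water and the 0.05 mL drop above):

```python
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_WATER_G = 18.0  # grams per mole of H2O
DROP_VOLUME_ML = 0.05      # 20 drops per millilitre
DENSITY_G_PER_ML = 1.0     # pure water, roughly

drop_mass_g = DROP_VOLUME_ML * DENSITY_G_PER_ML
molecules = drop_mass_g / MOLAR_MASS_WATER_G * AVOGADRO
atoms = molecules * 3      # each H2O molecule is 3 atoms

print(f"Molecules per drop: about {molecules:.2e}")  # ~1.7e+21 (over 1.5 sextillion)
print(f"Atoms per drop:     about {atoms:.2e}")      # ~5.0e+21 (over 5 sextillion)
```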

You need to read a bit about science; then you will be able to frame this question a bit more properly.
 
Last edited:
When you are done with your experiment, send me that bottle so I can do my own testing on it, in the interests of science. And a bottle opener with it please :)

Regarding your "bottle label at 200 feet "
This is about 120 feet.

OEJOmkE.png


Fuji X-T1
https://www.dpreview.com/reviews/fujifilm-x-t1

50-230mm zoom
https://www.dpreview.com/products/fujifilm/lenses/fujifilm_xc_50-230

35mm f1.4
https://www.dpreview.com/products/fujifilm/lenses/fujifilm_xf_35mm


Adobe Lightroom 5.7 to process the RAW images with a tiny bit of touchup.
Paintshop Pro 2019 to compile the jpg output from Lightroom and make a single .png image to display here.


Mostly default camera settings and autofocus.
If I desired to spend more time, I probably could have gotten it a little bit better. I was not sufficiently motivated to do so.
 