AMD Determines That Absolute Immersion In VR Requires 128 Megapixel Display


d_kuhn

Distinguished
Mar 26, 2002
704
0
18,990
While a 16K resolution is not THAT far away... the ability to (in real time) calculate and pump 130MP at 60+fps with photorealistic quality IS a long way away. No... the solution isn't to keep developing VR exactly the way we developed traditional displays... the solution is dynamic rendering. Once reliable and FAST eye tracking is added to a VR headset, you can render at high density only where the eye can actually SEE high density (where the fovea is pointed); once you get maybe 10-20 degrees off that point, you can dramatically simplify the rendering to only what the eye sees off-axis. The GPU workload, as well as the display bandwidth required, would be a fraction of a full render (too lazy to calculate, but I'd guess 5-10% or so). It's quite possible that today's high-end graphics cards could handle that load... so let's figure out how to track eye position. I want my full-FOV VR headset soon!
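For what it's worth, here is a minimal sketch of that estimate in Python. The band angles and shading rates are illustrative guesses, not measured values, but they land in the same ballpark as the 5-10% figure above:

```python
# Bands of (outer radius in degrees, linear shading rate) around the gaze
# point; the numbers are illustrative guesses, not measured values.
bands = [(10, 1.0), (20, 0.5), (55, 0.25)]

rendered, full, prev = 0.0, 0.0, 0
for outer, rate in bands:
    area = outer**2 - prev**2   # proportional to on-screen area of the band
    rendered += area * rate**2  # pixel count scales with the rate squared
    full += area
    prev = outer

print(f"Foveated render cost vs. full render: {rendered / full:.1%}")
# Prints about 11% -- the same ballpark as the 5-10% guess above.
```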
 

InvalidError

Titan
Moderator

While the human eye may not be able to discern individual pixels beyond a certain pixel density at a given distance, the improvement in sharpness is still perceivable a fair bit beyond that.

If you print text at 300dpi and 600dpi using a laser printer, the sharper, cleaner text at 600dpi should be fairly easy to identify.

Is the massive increase in compute power required really worth the small improvement in image sharpness? I doubt many people are going to be willing to spend 4X as much on compute power to gain maybe 10% in perceived image quality.
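The 4X figure is just area math, as a two-line check shows:

```python
# Doubling linear resolution quadruples the dot count over the same area:
for dpi in (300, 600):
    print(f"{dpi} dpi -> {dpi * dpi:,} dots per square inch")
# 600 dpi lays down 4x the dots of 300 dpi on the same page.
```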
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
We don't need 90 fps, but we certainly need very low latency. As long as the total from human input to the frame appearing on screen is 20 ms, that's fine. If they mean 20 ms of input lag, plus the frame render lag, plus the frame display lag (don't even get me started on double/triple buffering), then that's way, way too slow.
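To make the distinction concrete, here is a hypothetical motion-to-photon budget; the per-stage numbers are illustrative assumptions, not measurements of any real headset:

```python
# A hypothetical motion-to-photon budget at 90 Hz; the per-stage numbers
# are illustrative assumptions, not measurements of any real headset.
budget_ms = {
    "sensor sampling": 1.0,
    "game/sim update": 2.0,
    "GPU render": 11.1,       # one 90 Hz frame
    "scanout/display": 5.0,
}
total = sum(budget_ms.values())
print(f"Total: {total:.1f} ms ({'within' if total <= 20 else 'over'} the 20 ms target)")
```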
 

Freosan

Reputable
Aug 12, 2015
2
0
4,510
But the human eye can't see above 720p and 30FPS /s.
The human eye doesn't perceive resolution the way a display presents it. The eye perceives pixel density, which depends on the size of the "screen" the pixels are a part of. The density of a 1080p 24-inch display is much higher than that of a 60-inch one. The eye can also register up to around 48 "FPS," if you want to call it that. The trick to all this isn't what the human eye can register, though, since the eye can catch a "frame" that happens between the frames of the display. Our brain can also register much more than our eye can, and even when you don't "see" something, you will know it is there. The idea of VR isn't to trick the eyes, but to trick the brain. That presents a much larger problem, as you can imagine.
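As a concrete illustration of the density point, a quick pixels-per-inch calculation for 1080p at both sizes:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch from a display's resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

for size in (24, 60):
    print(f"1080p at {size} inches: {ppi(1920, 1080, size):.0f} ppi")
# ~92 ppi at 24 inches vs. ~37 ppi at 60 inches -- same pixels, far less density.
```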
 

Blueberries

Reputable
Dec 3, 2014
572
0
5,060
But no matter how you look at it, we really do have practically unlimited GPU power. Today's supercomputers can easily render 16K.

It'll be a long time before that's compact enough to put in a VR headset, but it's certainly a possibility.
 

falchard

Distinguished
Jun 13, 2008
2,360
0
19,790
I prefer the practical application of Eyefinity over VR goggles. There is something nice about going into a room where each wall is projecting an image of the environment.
 

sephirotic

Distinguished
Jan 29, 2009
67
0
18,630
112 million pixels actually sounds about right considering a full 120x135-degree field of view: all the area the eyes can rotate to and cover.
Though I believe that for first-generation devices, 2K per eye will be more than enough to give a VERY nice-looking image; in a couple of years, 8K, which gives 33 million pixels, will be the sweet spot.

I also believe 90Hz is overkill. 72Hz should be more than enough if the issues with motion blur are fixed. What matters more for movement precision isn't pure refresh rate, but reducing pixel persistence. That actually presents a problem for maximum brightness on OLED displays, though.

Back to the high-resolution issue: the solution is quite obvious, and the developers of the FOVE VR headset are already onto it: eye tracking and adaptive-resolution rendering, i.e., rendering only the portion the eyes are looking at in full detail. I'm quite disappointed that only FOVE has officially invested in eye-tracking technology so far and that Oculus has neglected it. Eye tracking is also ESSENTIAL for depth-of-field simulation and other very important forms of immersion in the full VR experience. I really wish FOVE success on its Kickstarter, and hope it grabs a significant share of the VR market when it's released, forcing Oculus to implement eye tracking in its second-generation device. I will definitely buy a FOVE headset, but not an Oculus if it doesn't come with eye tracking.
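A quick sanity check of the 112-megapixel figure, assuming roughly 60 pixels per degree of visual acuity (a commonly cited foveal estimate, used here as an assumption):

```python
# Sanity check on the megapixel figure, assuming ~60 pixels per degree
# (a commonly cited estimate of foveal acuity -- an assumption here).
px_per_deg = 60
fov_h, fov_v = 120, 135                      # degrees, per the post
per_eye = (fov_h * px_per_deg) * (fov_v * px_per_deg)
print(f"Per eye: {per_eye / 1e6:.0f} MP, both eyes: {2 * per_eye / 1e6:.0f} MP")
print(f"8K UHD for comparison: {7680 * 4320 / 1e6:.1f} MP")
# ~58 MP per eye, ~117 MP total -- close to the 112 MP figure; 8K is ~33.2 MP.
```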
 

virtualban

Distinguished
Feb 16, 2007
1,232
0
19,280
But the human eye can't see above 720p and 30FPS /s.

I'm gonna assume the /s is actually /sarcasm. Not typing out the whole word got you downrated like crazy though, which is funny.

Came here just to say that. To me it would have been funny even without any /s at the end, assuming a joke doesn't need to be made obvious. In fact, I'm saddened by the number of people responding to it seriously. I have higher hopes for the communities I hang around in.
 

Adilaris

Reputable
Feb 16, 2015
23
0
4,510
But the human eye can't see above 720p and 30FPS /s.
Incorrect on both counts, even if you were being sarcastic (which is how a lot of misinformation spreads).
The human eye can see a much higher resolution, and the "fps" is actually somewhere around 60-70 on average, depending on the person.
 

alidan

Splendid
Aug 5, 2009
5,303
0
25,780
But the human eye can't see above 720p and 30FPS /s.
Resolution depends on distance, but I can sure as hell tell the difference between 30 and 60fps; for that matter, I can tell the difference between 55fps and 60fps.
Though I assume your post is a joke, some people take it seriously.

While a 16K resolution is not THAT far away... the ability to (in real time) calculate and pump 130MP at 60+fps with photorealistic quality IS a long way away. No... the solution isn't to keep developing VR exactly the way we developed traditional displays... the solution is dynamic rendering. Once reliable and FAST eye tracking is added to a VR headset, you can render at high density only where the eye can actually SEE high density (where the fovea is pointed); once you get maybe 10-20 degrees off that point, you can dramatically simplify the rendering to only what the eye sees off-axis. The GPU workload, as well as the display bandwidth required, would be a fraction of a full render (too lazy to calculate, but I'd guess 5-10% or so). It's quite possible that today's high-end graphics cards could handle that load... so let's figure out how to track eye position. I want my full-FOV VR headset soon!

16K is a lot further out than you realize, at least for anything consumer.
4K likely won't catch on like HD did, because the difference between 4K and 1080p is negligible if not nonexistent at normal viewing distances, while on the PC, where you are normally closer to the screen, there are many people like me who won't pay for a very expensive form of AA and want screen real estate instead of sharpness.

8K is already used in the medical field, and has been for a long time if I remember right, but even so, 5K is going to be the upper limit of professional hardware until some real 8K content comes around. A good question about that, though: will movie theaters even be around in 10 years? The biggest reason to push higher resolutions is that the screens they play on are big enough that even 16K isn't sharp enough. I honestly don't think we will ever see a home application of 16K outside of very expensive home theaters, at least until the cost of processing that much information is so minuscule that the jump from 8K to 16K isn't felt at all.



While the human eye may not be able to discern individual pixels beyond a certain pixel density at a given distance, the improvement in sharpness is still perceivable a fair bit beyond that.

If you print text at 300dpi and 600dpi using a laser printer, the sharper, cleaner text at 600dpi should be fairly easy to identify.

Is the massive increase in compute power required really worth the small improvement in image sharpness? I doubt many people are going to be willing to spend 4X as much on compute power to gain maybe 10% in perceived image quality.

With your example of print media... no, text-wise it's not noticeable, but image-wise it is. That's more because when people see a picture they really like, they bring it closer to their faces to look at it, something you don't do with text.

We don't need 90 fps, but we certainly need very low latency. As long as the total from human input to the frame appearing on screen is 20 ms, that's fine. If they mean 20 ms of input lag, plus the frame render lag, plus the frame display lag (don't even get me started on double/triple buffering), then that's way, way too slow.

The 90fps helps with motion sickness.

But no matter how you look at it, we really do have practically unlimited GPU power. Today's supercomputers can easily render 16K.

It'll be a long time before that's compact enough to put in a VR headset, but it's certainly a possibility.

You don't need a supercomputer, you just need a way to output the data... The most ghetto way to do this would be to sync up two computers with 6 Fury X cards between them (a Fury X has 3 DisplayPorts capable of 4K 60fps, if I'm reading correctly; 4 4K monitors = 1 8K, and 4 8K = 1 16K, so by that math 6 Fury X cards could output a 16K signal's worth of data by splitting it into 16 4K videos). Or, if a single computer couldn't handle 9 4K 60fps videos, then 16 computers, each with whatever is powerful enough to run one 4K tile of the video, and a way to sync them up.

That amount of processing power would, I believe, be under what we consider a supercomputer.
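The tile math checks out, for what it's worth, assuming 3 DisplayPort outputs per card as stated above:

```python
# Tile math for the "ghetto 16K" idea: a 16K frame as a grid of 4K tiles,
# assuming 3 DisplayPort outputs per Fury X as stated above.
tile_w, tile_h = 3840, 2160                 # one 4K UHD tile
full_w, full_h = 4 * tile_w, 4 * tile_h     # 15360 x 8640, i.e. "16K"
tiles = (full_w // tile_w) * (full_h // tile_h)
ports_per_card = 3
cards = -(-tiles // ports_per_card)         # ceiling division
print(f"{tiles} tiles -> {cards} cards ({cards * ports_per_card} outputs, "
      f"{cards * ports_per_card - tiles} spare)")
# 16 tiles -> 6 cards with 18 outputs, matching the post's count.
```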


I prefer the practical application of Eyefinity over VR goggles. There is something nice about going into a room where each wall is projecting an image of the environment.

In all honesty, I would prefer that for most games made right now with no native VR support... However, ask me about a racing game and my opinion changes. Build a game from the ground up for VR, think of the Doctor Who Weeping Angels but in a VR game (there is one like this that's a woods with trees), and I would hands-down prefer VR. Hell, with a high enough resolution VR setup, you could emulate the room you are sitting in, make it look like you are sitting at a desk with 3 monitors, and play a game inside of that. I honestly think that would be the best use of VR when playing a traditional game, because it doesn't add motion controls to the game itself.

112 million pixels actually sounds about right considering a full 120x135-degree field of view: all the area the eyes can rotate to and cover.
Though I believe that for first-generation devices, 2K per eye will be more than enough to give a VERY nice-looking image; in a couple of years, 8K, which gives 33 million pixels, will be the sweet spot.

I also believe 90Hz is overkill. 72Hz should be more than enough if the issues with motion blur are fixed. What matters more for movement precision isn't pure refresh rate, but reducing pixel persistence. That actually presents a problem for maximum brightness on OLED displays, though.

Back to the high-resolution issue: the solution is quite obvious, and the developers of the FOVE VR headset are already onto it: eye tracking and adaptive-resolution rendering, i.e., rendering only the portion the eyes are looking at in full detail. I'm quite disappointed that only FOVE has officially invested in eye-tracking technology so far and that Oculus has neglected it. Eye tracking is also ESSENTIAL for depth-of-field simulation and other very important forms of immersion in the full VR experience. I really wish FOVE success on its Kickstarter, and hope it grabs a significant share of the VR market when it's released, forcing Oculus to implement eye tracking in its second-generation device. I will definitely buy a FOVE headset, but not an Oculus if it doesn't come with eye tracking.

The way motion blur works in video games is not the same as in movies. Let's say you have a game that runs between 60 and 120fps with motion blur on. If it runs at 120 without slowing down, it won't look bad, but the moment the game dips below 120fps, you lose the illusion the motion blur was adding.

Granted, I see motion blur as an artifact of cinema from a time when we used film and it cost real money to shoot, one that should be eliminated as fast as possible, but that's my opinion against the idiots who want to hold advancement back because "it doesn't look right."

As for depth of field, it's one of the first things I turn off in games because it's annoying as hell in practice. Cinematics look fine, but in gameplay, what I'm actually looking at is out of focus until I move the mouse to it, and only then is it in focus... I'd rather not have any DOF if that's the case, and the lag that would add in VR would still be noticeable.

Now, use that tech as a tessellation parameter instead... that is where it could get interesting: a way for the game to see what you are looking at and render only that at the highest quality, while everything else is rendered at Xbox 360 or PS2 levels of detail to save processing power... that's interesting.
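Something like this, as a rough sketch; the angle thresholds and factors are made-up illustrations of the idea, not values from any engine:

```python
# A rough sketch of gaze-driven level of detail: the tessellation factor
# falls off with angular distance from the gaze point. The thresholds and
# factors are made-up illustrations, not values from any engine.
def tessellation_factor(gaze_angle_deg, max_factor=64):
    """Pick a tessellation factor for geometry seen at the given angle
    away from where the player is looking."""
    if gaze_angle_deg < 5:
        return max_factor            # full detail in the fovea
    if gaze_angle_deg < 20:
        return max_factor // 4       # reduced detail in the near periphery
    return max(1, max_factor // 16)  # "PS2-level" detail far off-axis

for angle in (2, 10, 45):
    print(f"{angle:>2} degrees off gaze -> factor {tessellation_factor(angle)}")
```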

But the human eye can't see above 720p and 30FPS /s.
Seeing all the reactions to that was both great and sad at the same time.

I do not use Reddit (unless I get linked by someone else), but even I could see the sarcasm in that statement, made even more obvious by the /s.

If you spend any time online, you know we deal with people who really think that on a daily basis, and their spreading it to others creates masses who believe 24fps is all you need for a game because it's "cinematic"... My brother has a game where the devs used that as an excuse and never optimized it, so it runs like hell regardless of anything you do.
 
All of this talk about megapixels, framerates, and latency has everyone confused.
The problems run far deeper than that.
First, let's remember that for an engine to render a photorealistic environment, we need perfect uniformity across all the graphics and textures.
That is something we have never seen in games.
The problem is that if you make a game with photorealistic quality, anything that falls below that quality will break your immersion instantly.

Second, we have the problem of lighting. "Principles of Lighting and Rendering with John Carmack at QuakeCon 2013" on YouTube explains this far better than I can.

Then add VR on top of that... We are talking about probably 25-35 years, even if everything goes right (doubtful).
And if we keep getting console ports, make that 100 years.

What we need is: realistic ray tracing, 110+ fps, latency under 16 ms, roughly 15K screens, and displays with perfect color accuracy, all under 600 dollars...
Maybe when I'm 70.
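For reference, the frame-time math behind those targets:

```python
# Frame-time math behind those targets: at 110+ fps the frame budget alone
# is already well under 16 ms.
for fps in (90, 110, 144):
    print(f"{fps} fps -> {1000 / fps:.2f} ms per frame")
# 110 fps leaves ~9.1 ms per frame, so an under-16 ms motion-to-photon target
# also hinges on sensor, simulation, and display latency, not just the GPU.
```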
 