The human eye doesn't perceive resolution the way a display presents it. The eye perceives pixel density, which depends on the size of the screen the pixels are spread across: a 1080p 24-inch display has much higher density than a 1080p 60-inch one. The eye can also register up to around 48 "FPS", if you want to call it that. The trick to all this isn't what the human eye can register, though, as the eye can catch a "frame" that happened between the frames of the display. Our brain can also register much more than our eye can, and even when you don't "see" something, you will know it is there. The idea of VR isn't to trick the eyes, but to trick the brain. This presents a much larger problem, as you can imagine.
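To make the density point concrete, here is a minimal sketch of the standard pixels-per-inch calculation (resolution over diagonal); the 24" and 60" sizes are the ones from the comment above:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a display of the given resolution and diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

print(round(ppi(1920, 1080, 24)))  # 1080p at 24 inches: ~92 PPI
print(round(ppi(1920, 1080, 60)))  # 1080p at 60 inches: ~37 PPI
```

Same pixel count, roughly 2.5x the density on the smaller panel.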
Came here to say just that. And to me it would have been funny even without any /s at the end, assuming a joke doesn't need to be made obvious. In fact, I'm saddened by the number of people responding to it seriously. I have higher hopes for the communities I hang around in.
Incorrect on both counts, even if you were being sarcastic (which is how a lot of misinformation spreads).
All the reactions were both great and sad to see at the same time.
Resolution depends on distance, but I can sure as hell tell the difference between 30 and 60 fps. For that matter, I can tell the difference between 55 fps and 60 fps.
16k is a lot further out than you realize, at least for anything consumer. While a 16k resolution is not THAT far away, the ability to calculate and pump 130MP in real time at 60+ fps with photorealistic quality IS a long way away. No, the solution isn't to just keep developing VR exactly like we developed traditional displays; the solution is dynamic rendering. Once reliable and FAST eye tracking is added to a VR headset, you can render at high density only where the eye can actually SEE high density (where the fovea is pointed). Once you get maybe 10-20 degrees off that point, you can dramatically simplify the rendering to match what the eye sees off-axis. The GPU workload as well as the display bandwidth required would be a fraction of a full render (too lazy to calculate, but I'd guess 5-10% or so). It's quite possible that today's high-end graphics cards could handle that load, so let's figure out how to track eye position. I want my full-FOV VR headset soon!
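As a rough sanity check on that 5-10% guess, here's a back-of-the-envelope sketch. The 110-degree FOV, 20-degree foveal circle, and 1/16 peripheral shading rate are all assumed numbers for illustration, not measurements from any headset:

```python
import math

# Assumptions: ~110 deg circular FOV, full-rate rendering inside a
# 20 deg foveal circle, 1/16 shading rate everywhere else.
fov_deg = 110
fovea_deg = 20
peripheral_rate = 1 / 16

# Treat pixel density as uniform across the view, so pixel counts scale
# with area; a flat-angle circle approximation is close enough here.
fovea_area = math.pi * (fovea_deg / 2) ** 2
total_area = math.pi * (fov_deg / 2) ** 2
fovea_fraction = fovea_area / total_area

effective = fovea_fraction + (1 - fovea_fraction) * peripheral_rate
print(f"shaded pixels vs full render: {effective:.1%}")  # ~9.3%
```

Even with these crude assumptions, the result lands inside the 5-10% range the comment guesses at.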
With your example of print media: no, text-wise it's not noticeable, but image-wise it is. That's more because when people see a picture they really like, they bring it closer to their faces to look at it better, which is something you don't do with text.

darkokills: But the human eye can't see above 720p and 30FPS /s.
While the human eye may not be able to discern individual pixels beyond a certain pixel density at a given distance, the improvement in sharpness is still perceivable well beyond that point.
If you print text at 300 dpi and 600 dpi on a laser printer, the sharper, cleaner text at 600 dpi should be fairly easy to identify.
Is the massive increase in compute power required really worth the small improvement in image sharpness? I doubt many people are going to be willing to spend 4X as much on compute power to gain maybe 10% in perceived image quality.
the 90fps helps with motion sickness

We don't need 90 fps, but we certainly do need very low latency. As long as the time from human input until the frame is rendered to screen is 20 ms, that's fine. But if they mean 20 ms of input lag, plus the frame-render lag, plus the frame-display lag (don't even get me started on double/triple buffering), then that's way, way too slow.
You don't need a supercomputer, you just need a way to output the data. The most ghetto way to do this: a Fury X has 3 DisplayPorts each capable of 4k 60fps if I'm reading correctly, and 4 4k monitors = 1 8k and 4 8k = 1 16k, so 16 4k outputs make up one 16k signal. By that math, 6 Fury X cards could output a 16k signal's worth of data by splitting it into 16 4k streams: two computers with 6 Fury Xs between them, or if a single computer couldn't handle 9 4k 60fps streams, then 16 computers, each with whatever is powerful enough to run one 4k stream, and a way to sync them all up.

But no matter how you look at it, we really do have unlimited GPU power. Today's supercomputers can easily render 16k.
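The tiling arithmetic in that "16 x 4k = 16k" claim checks out, assuming the usual 16:9 resolutions (3840x2160 for 4k, 7680x4320 for 8k, 15360x8640 for 16k):

```python
# Standard 16:9 resolutions at each tier.
res_4k = (3840, 2160)
res_8k = (7680, 4320)
res_16k = (15360, 8640)

def pixels(res):
    return res[0] * res[1]

# A 2x2 grid of 4k panels covers exactly one 8k frame, and a 2x2 grid
# of 8k panels covers one 16k frame, so 16 4k tiles cover a 16k frame.
assert 4 * pixels(res_4k) == pixels(res_8k)
assert 4 * pixels(res_8k) == pixels(res_16k)

print(pixels(res_16k) // pixels(res_4k))  # 16 tiles
print(pixels(res_16k) / 1e6)              # ~132.7 megapixels
```

Which also shows where the ~130MP figure upthread comes from.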
It'll be a long time before that's compact enough to put in a VR headset, but it's certainly a possibility.
In all honesty, I would prefer that for most games made right now with no native VR support. But ask me about a racing game and my opinion changes. Build a game from the ground up for VR, think of the Doctor Who weeping angels but in a VR game (there is one like this that's a woods with trees), and I would hands down prefer VR. Hell, with a high enough resolution VR setup you could emulate the room you are sitting in, make it look like you are sitting at a desk with 3 monitors, and play a game inside of that. I honestly think that would be the best use of VR for playing a traditional game, because it doesn't add motion controls to the game itself.

I prefer the practical application of Eyefinity over VR goggles. There is something nice about going into a room where each wall is projecting an image of the environment.
The way motion blur works in video games is not the same way it works in movies. Let's say you have a game that runs between 60 and 120 fps and you have motion blur on. If it runs at 120 without slowing down, it won't look bad, but the moment the game speed dips and something falls below 120 fps, you lose the illusion the motion blur was adding.

112 million pixels actually sounds right considering a full 120x135 field of view: all the area the eyes can rotate to and cover.
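The 112-million figure can be roughly reconstructed from the commonly cited 1-arcminute (about 60 pixels per degree) acuity limit for 20/20 vision; treating 120x135 degrees as a per-eye field and doubling for two eyes are my assumptions here, not the original commenter's stated math:

```python
# Assumption: 20/20 acuity resolves about 1 arcminute, i.e. ~60 px/degree.
px_per_degree = 60

# Assumed per-eye field of view of 120 x 135 degrees, doubled for two eyes.
h_deg, v_deg = 120, 135
per_eye = (h_deg * px_per_degree) * (v_deg * px_per_degree)
both_eyes = 2 * per_eye

print(per_eye / 1e6)    # ~58.3 MP per eye
print(both_eyes / 1e6)  # ~116.6 MP total, in the ballpark of 112 MP
```

Under these assumptions the estimate lands within a few percent of the quoted number.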
Though I believe that for first-generation devices 2k per eye will be more than enough to give a VERY nice-looking image, in a couple of years 8k, which gives 33 million pixels, will be the sweet spot.
I also believe 90Hz is overkill. 72Hz should be more than enough if issues with motion blur are fixed. What is more important for movement precision isn't pure refresh rate, but reducing pixel persistence. This actually presents a problem for maximum brightness on OLED displays, though.
Back to the high-resolution issue: the solution is quite obvious, and the developers of FOVE VR are already onto it: eye tracking and adaptive-resolution rendering, i.e. only render at full resolution the portion the eyes are looking at. I'm quite disappointed that only FOVE has officially invested in eye-tracking technology so far, and Oculus has neglected it. Eye tracking is also ESSENTIAL for depth-of-field simulation and other very important forms of immersion in the full VR experience. I really wish FOVE succeeds on its Kickstarter and grabs a significant share of the VR market when it's released, to force Oculus to implement eye tracking on its second-generation device. I will definitely buy a FOVE headset, but not an Oculus if it doesn't come with eye tracking.
If you spend any time online, you'll know we deal with people who really think that on a daily basis, and they spread it to others until the masses believe 24 fps is all you need for a game because it's "cinematic". My brother has a game where the devs used that as an excuse and never optimized it, so it runs like hell regardless of anything you do.