News LG unveils world's first 540 Hz OLED monitor with 720p dual mode that can boost up to 720 Hz — Features a 27-inch 4th Gen Tandem OLED panel with QH...

My guess is that there are <200 humans on the planet who can benefit from >300 Hz images and zero who benefit from going from 480 to anything higher. At this point it is starting to get stupid.
Only if you stick to sample-and-hold, full-panel refreshing. Increasing refresh rates lets you trend back towards what CRTs were capable of, with line-by-line impulse display and the ability to 'chase the beam' with rendering (producing the image for each line just before that line is scanned out to the display).
Even if the frame rate remains at a 'pedestrian' 120 Hz, higher refresh rates allow these sorts of latency-reducing and persistence-reducing driving modes. And the great part about this being the native panel rate is that you can do it in software rather than needing an expensive panel-side processor (e.g. the original G-Sync FPGA module).
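Roughly, 'chasing the beam' looks something like the sketch below. It's only an illustration - the 540 Hz rate, the 8-slice split and the busy-wait are all assumptions standing in for a real renderer that would poll the actual scanline position:

```python
import time

PANEL_HZ = 540          # assumed native panel refresh rate
SLICES_PER_FRAME = 8    # render the frame as 8 horizontal strips
REFRESH_S = 1.0 / PANEL_HZ
SLICE_S = REFRESH_S / SLICES_PER_FRAME

def render_slice(index: int) -> None:
    """Placeholder: draw only the strip that is about to be scanned out,
    using the most recently sampled input state."""
    pass

def chase_the_beam() -> None:
    """Render each strip just before the panel scans that region out,
    instead of finishing the whole frame and waiting for vblank."""
    frame_start = time.perf_counter()
    for i in range(SLICES_PER_FRAME):
        render_slice(i)
        # Wait until scanout should reach the next strip.
        target = frame_start + (i + 1) * SLICE_S
        while time.perf_counter() < target:
            pass  # busy-wait stand-in for polling the real scanline register

chase_the_beam()
```

The higher the native refresh rate, the shorter each wait, so the input sampled for every strip is that much fresher - which is the latency win even when the content itself only updates at 120 Hz.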
 
I'm more concerned about the color curves on this one. Yes, the FPS freaks will freak out even if they can't actually see the difference, but who needs it if the color reproduction is nowhere near accurate, let alone good?
 
My guess is that there are <200 humans on the planet who can benefit from >300 Hz images and zero who benefit from going from 480 to anything higher. At this point it is starting to get stupid.
It's not only about what you can see; it's also about things like input delay. Not that sub-10 ms is something to fight over, but reducing latency towards zero is something that people will strive for.
 
It's not only about what you can see; it's also about things like input delay. Not that sub-10 ms is something to fight over, but reducing latency towards zero is something that people will strive for.
A 1/60 s delay is about 0.017 s. 1/120 s is about 0.008 s. 1/540 s is about 0.0019 s.
These numbers are mostly marketing and self-persuasion. And a way of making money from both.
The human body does not work that fast; typical eye-to-movement reaction time is on the order of 0.05-1 s.
Basically, you won't notice the difference *unless* you imagine yourself doing it (placebo effect).
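For reference, those figures are just one second divided by the refresh rate - a quick, purely illustrative check:

```python
# Frame interval for each refresh rate, in milliseconds.
for hz in (60, 120, 540):
    print(f"{hz} Hz -> {1000 / hz:.2f} ms per frame")
# 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 540 Hz -> 1.85 ms
```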

---

P.S.

The bottom line is, our eye-brain channel and reception/recognition has a finite and pretty much measurable 'speed'. That's why we get a 'stroboscope effect' under disco lamps. That's why the quickly rotating wheels of a car suddenly appear to 'rotate in the other direction'. That's why quickly rotating fans look like blurry discs. That's why we perceive CRTs, LED displays and lamps (especially LED ones) not as a blinking mess but as a stable moving picture or a steady light source. That's why fast-moving (especially bright or high-contrast) objects turn into streaks in our sight. That's why we can watch anime without getting sick even though much of it is animated at 5-15 FPS. Etc., etc.

It's non-linear. It all depends on overall light levels, scene contrast, distance and active eye focus, concentration on the object, its position and speed of movement, our condition, age and level of fatigue, etc. Hell, even the position in the visual field matters: the center is 'faster' than the edges. Eye sensors need to recharge as well, although they do that in alternating patterns.

But overall, it's measured at around 30-40 FPS on average. The reason e.g. 30-40 Hz lighting looks 'blinky' is that our vision is also 'framed', so if the light-up doesn't match our 'frame', we only see it partially.

And that's it: almost any complex motion above 60 FPS is bound to look no different. The exception is a medium-sized object (say, apple-sized) exactly in the center of vision at a medium (3-5 m) distance. Yes, you can perceive such objects faster. Yes, you can perceive a full-scene light going up and down faster. There have been 'tests' and 'articles' on that. But you can't keep that up continuously for a minute or more, or for anything more complex than a full scene toggling its brightness or a single moving object. Eyes have their limits. The brain has its limits.

Then why does going from 30 to 60 FPS look that much smoother, and 120 FPS a bit smoother still?

Interpolation, sir.

Eye sensors 'charge' (or should I say 'deplete') as light hits them, so they accumulate the effect, much like photo cameras do (yes, exposure time, exactly). Thus, changes that happen faster than the eye can perceive are averaged together into a single 'frame' in each sensitive element of the eye.

Let's talk MSAA, on a good-resolution display at an appropriate viewing distance.

No MSAA to 2x MSAA is usually a huge, obvious improvement, yes? That's our '30 FPS' (for the average speed of vision). The jump from 30 to 60 FPS is the '2x MSAA' of vision speed: it makes moving things much smoother, just as 2x MSAA does for subpixel detail that can't be rendered directly.

Now take 60 -> 120 FPS. That's MSAA 2x -> 4x. Yes, some will notice the difference. But many will say it's not worth the hassle and the extra resources, or that it gives them nothing.

Now take MSAA 4x -> 8x... You know... No difference. Almost. And that's what happens with FPS bloat as well: the more we bloat the rate, the smaller the differences get.
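To put rough numbers on that diminishing return (an illustrative snippet, assuming nothing beyond frame time = 1/rate):

```python
# Absolute frame-time saving for each doubling of the frame rate.
rates = [30, 60, 120, 240, 480]
for lo, hi in zip(rates, rates[1:]):
    saved_ms = 1000 / lo - 1000 / hi
    print(f"{lo} -> {hi} FPS saves {saved_ms:.1f} ms per frame")
# 30 -> 60 saves 16.7 ms; 60 -> 120 saves 8.3 ms;
# 120 -> 240 saves 4.2 ms; 240 -> 480 saves 2.1 ms
```

Each doubling halves the saving, which is exactly the 2x -> 4x -> 8x MSAA pattern.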

---

P.P.S.

Personally, I'd appreciate 8K more than anything above 120 FPS on commonly available displays. And yes, better color calibration, because photos look awry on poorly calibrated displays. 4K pixels are still distinctly perceivable even on a 32" panel, so resolution is nowhere near the eye's limit, while 120 FPS is already several times over the top.

---

P.P.P.S.

What I wrote is a very simplistic aggregation of the principles. It's not exactly correct if we go into detail; it's just the layman basics. If one wants, one can study the matter further and find different proofs and some divergences from this. Or one can simply believe that 240-1000 FPS will magically improve one's reaction time - and the interesting thing is that if this belief is strong and self-convincing enough, it will. :) Just because the brain likes stimulus. So yeah, it may still work for some, even if it works in a somewhat different way.
 
It's not only about what you can see; it's also about things like input delay. Not that sub-10 ms is something to fight over, but reducing latency towards zero is something that people will strive for.
True about input delay, or more likely output delay - the time from reality to something viewable that you can respond to. That I can appreciate as a benefit, but again, I think the number of people who can take advantage of going from 10 ms of output lag to 3 ms is small, and the number of people who can appreciate the change from 3 ms to 0 ms is likely absolute zero, or at the very least unmeasurable.
 
But overall, it's measured at around 30-40 FPS on average. The reason e.g. 30-40 Hz lighting looks 'blinky' is that our vision is also 'framed', so if the light-up doesn't match our 'frame', we only see it partially.

And that's it: almost any complex motion above 60 FPS is bound to look no different.
If I had a penny every time somebody equated the flicker-fusion threshold to the limits of motion perception I'd have enough to buy a 5090. It remains a myth, as the flicker-fusion threshold only applies to illuminants, not images. Actual motion perception is far more complex.
Eye sensors 'charge' (or should I say 'deplete') as light hits them, so they accumulate the effect, much like photo cameras do (yes, exposure time, exactly).
That is not remotely how rod and cone cells work, let alone how the visual cortex processes action potentials.
 
If I had a penny every time somebody equated the flicker-fusion threshold to the limits of motion perception I'd have enough to buy a 5090. It remains a myth, as the flicker-fusion threshold only applies to illuminants, not images. Actual motion perception is far more complex.

That is not remotely how rod and cone cells work, let alone how the visual cortex processes action potentials.
That is how they work. Rods especially have very high delays: their initial reaction is almost immediate, but their recharge time is slow. Cones are several times faster, but still need to recharge.

And no, it's not about flicker fusion or the speed of level changes, and even less about equating the two. I specifically distinguished between them in my long post above (flicker Hz vs. motion FPS). But the thing is, while the available speed margins of these effects are different, the underlying mechanisms in the eye that cause both flicker fusion and motion averaging (interpolation) are the same.

An immediate WtB or BtW transition is fast, but a WtBtWtBtW... or BtWtBtWtB... sequence becomes very slow on the second and subsequent stages (although, as mentioned, the first WtB/BtW transition is extremely fast). And it also averages! GtG is fairly speedy :) but has its own quirks, like reduced contrast and pulse sensitivity. And it also averages, just at a higher rate.

Also remember we are talking about chemistry and analog levels, so the reaction is not a distinct pulse, just a change in firing level and/or intervals - with level adaptivity in the brain on top to make it more puzzling (it's not only the eyes that adapt to light and darkness; the brain adjusts contrast levels as well...).

Just google photoactivation mechanics studies, not myths or myth-busters; it's something to read and take note of. If you google further, you can even find reaction-time studies. There is no concrete answer to 'how exactly', but the general idea of sensors accumulating signal (photons) and needing to recharge is nothing new. When, e.g., a rod receives a photon, it reacts, but that doesn't mean the reaction stops if it receives more photons. Levels still change, just more slowly, averaging the effect.
https://www.sciencedirect.com/science/article/abs/pii/S1095643308007022
etc.

Motion perception in the brain is adaptive and not only vision-related (yes, sound plays a role too, and there is even a kind of reverse channel: https://www.illusionsindex.org/i/skipping-pylon ), and thus extremely complex; I don't think any current studies explain it in enough depth.
 
This might start to approach the needs of that one guy, can't recall if it was here or another forum, who claimed that he could see flicker/lack of smoothness at anything less than 1000 Hz refresh.

This product was clearly made to show that particular user that their goals shall soon be within reach. Today, 720 Hz. Tomorrow, surely 4-digit refresh rates!
 
Basically, you won't notice the difference *unless* you imagine yourself doing it (placebo effect).
Hence why I said we shouldn't fight about it. Regardless of whether you perceive it or not, that doesn't take away from the objective fact that less latency is better. I'm not getting into the rest of your post, mostly because I wasn't trying to debate anything specific regarding human reaction time.

True about input delay, or more likely output delay - the time from reality to something viewable that you can respond to. That I can appreciate as a benefit, but again, I think the number of people who can take advantage of going from 10 ms of output lag to 3 ms is small, and the number of people who can appreciate the change from 3 ms to 0 ms is likely absolute zero, or at the very least unmeasurable.
I mean there's likely no way someone would notice it, but that input advantage being there is still technically a form of competitive edge. It's certainly not something I'd buy into, personally, outside of maybe the novelty of its existence.

Meanwhile, Joe Random will drop in here stating he can definitively see the difference in Game X at 600 FPS vs 500 FPS, on a 5-year-old monitor.
I'd say the most anyone would notice is if the game is fluctuating wildly, far more than 600 to 500. But that would be less about the picture and more about the input (unless we're talking sub-100 fps or something). But, yes, cognitive bias is huge.
 
Isn't it obvious? You have to one-up your older products and your competitors' specs in some form, otherwise you'd be seen as stagnant, and investors don't like stagnation.

The improvements are incremental on OLED, but you have to realize OLED itself was a breakthrough (debatable) compared to LCDs.
Next gen is what? QD-LED and MicroLED?
 
The bottom line is, our eye-brain channel and reception/recognition has a finite and pretty much measurable 'speed'. That's why we get a 'stroboscope effect' under disco lamps. That's why the quickly rotating wheels of a car suddenly appear to 'rotate in the other direction'. That's why quickly rotating fans look like blurry discs. [...]

But overall, it's measured at around 30-40 FPS on average. The reason e.g. 30-40 Hz lighting looks 'blinky' is that our vision is also 'framed', so if the light-up doesn't match our 'frame', we only see it partially.

And that's it: almost any complex motion above 60 FPS is bound to look no different. [...]
So many wrongs in this "I know it all" comment. Human vision is NOT framed at all, and the examples you give here are actually caused by the opposite: our continuous (non-discrete) perception. The wheel or fan effect can only be seen in movies or under artificial lighting (lamps powered by alternating current flicker at twice the mains frequency, 120 Hz in North America, for example). Since the brain perceives continuously, it needs to do something with the missing information and so fills in the gaps between images, leading to those optical illusions. It's absolutely not because it sees in "frames", despite what all the ignorants on Reddit say; it's actually the exact opposite.

And I can't believe we are still debating the "most people can't tell the difference above 60 FPS" or "movies at 30 Hz are perfectly smooth" myths. Movies at 30 Hz would be very choppy if they didn't put a load of motion blur in them. And I can tell you that anybody who is used to 120+ FPS can definitely tell when a game runs at 60 FPS. It feels really awful and very laggy. And it's not because of the smoothness of the image; it's because of the delay between your input and what happens on the screen, and that can be felt at a much smaller time scale than actual image-motion perception. Most experts say that most people can feel an improvement up to about 140-160 FPS, and a minority (like pro e-sports players) can feel it up to several hundred frames per second. And since the brain doesn't see in frames, there's no precise limit past which you can no longer tell the difference; it mostly depends on each person's sensitivity.

So yes, for most people a 400 Hz monitor doesn't do much more than provide a placebo effect, but that doesn't mean a 700 Hz monitor is totally useless. The e-sports pros who want to make sure they get every advantage they can would surely jump on this, like Olympic athletes who adopt any authorized equipment that can provide the smallest edge and let them win the competition by a fraction of a second.

Moreover, this is called technological advancement, and the fact that someone like you doesn't see any use for it right now doesn't mean we should just give up. We would still be living in caves if we all thought like that.
 
Moreover, this is called technological advancement, and the fact that someone like you doesn't see any use for it right now doesn't mean we should just give up. We would still be living in caves if we all thought like that.
This is technological "advancement" purely to take money off the gullible; it serves no other purpose.