Actually, it depends on the user and their application... some people like me actually DO think about it.
Example: I write my own code to do mapping. The idea is to show geopolitical events over time by updating a displayed map of an area (think worldwide migration of languages, or the Roman conquests of Europe, Asia, or Africa). A typical map may have 16 M triangles forming areas on the earth. The projection is user-selectable. The orientation of the projection is also user-selectable and can change in real time. The globe can rotate in real time based on either user interaction or the passing of world time (one year per second).
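To give a feel for the per-vertex math, here is a minimal sketch, not the actual code: rotate a lat/lon point with world time, then apply one selectable projection (stereographic here, since it is a projection that actually needs a divide per vertex). All function names are illustrative.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Geographic coordinates (radians) to a point on the unit sphere.
Vec3 to_sphere(double lat, double lon) {
    return { std::cos(lat) * std::cos(lon),
             std::cos(lat) * std::sin(lon),
             std::sin(lat) };
}

// Spin the globe about the polar axis; theta advances with simulated
// world time (e.g. one year per second) or with user interaction.
Vec3 rotate_z(const Vec3& p, double theta) {
    const double c = std::cos(theta), s = std::sin(theta);
    return { c * p.x - s * p.y, s * p.x + c * p.y, p.z };
}

// Stereographic projection from the north pole onto the equatorial
// plane: one double-precision divide per vertex, times millions of
// vertices per frame.
Vec2 project_stereographic(const Vec3& p) {
    const double d = 1.0 - p.z;   // singular only at the pole itself
    return { p.x / d, p.y / d };
}
```

Swapping the projection just means swapping the last function; the rotate-then-project pipeline stays the same.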
A simulation consists of updating the projected areas, text, and colors of a rotating projection at 60 frames a second. A frame may involve roughly 16 M floating-point calculations (double precision, including divides), packaging for OpenGL, and a rendering pass by the graphics engine. Historically, bandwidth is the issue: nothing is gained by 6 vs 8 cores because memory bandwidth is slow and the cache is too small. But while I watch or drive the simulation, I can interact with it. The load is dynamically spread across the pool of cores based on the cycle time each core took on the last frame (sketched below).
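A minimal sketch of that rebalancing scheme, under assumptions of my own: each worker thread times its slice of the triangle range, and the next frame resizes slices in proportion to each core's measured throughput. Names and structure are hypothetical, not the author's actual code.

```cpp
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

struct Slice { std::size_t begin, count; double last_ms = 1.0; };

// Placeholder for the real project-and-pack work on triangles
// [begin, begin + count).
void process(std::size_t begin, std::size_t count) {
    volatile double sink = 0;
    for (std::size_t i = 0; i < count; ++i) sink += std::sin(double(begin + i));
}

void render_frame(std::vector<Slice>& slices, std::size_t total) {
    std::vector<std::thread> pool;
    for (auto& s : slices) {
        pool.emplace_back([&s] {
            auto t0 = std::chrono::steady_clock::now();
            process(s.begin, s.count);
            auto t1 = std::chrono::steady_clock::now();
            // Floor the time so a trivially fast slice can't divide by zero.
            s.last_ms = std::max(
                std::chrono::duration<double, std::milli>(t1 - t0).count(), 1e-3);
        });
    }
    for (auto& t : pool) t.join();

    // Rebalance: a core's share of the next frame is proportional to its
    // measured throughput (triangles per millisecond) on this frame.
    double total_rate = 0;
    for (const auto& s : slices) total_rate += s.count / s.last_ms;
    std::size_t begin = 0;
    for (auto& s : slices) {
        const double rate = s.count / s.last_ms;
        s.begin = begin;
        s.count = static_cast<std::size_t>(total * rate / total_rate);
        begin += s.count;
    }
    if (!slices.empty()) slices.back().count += total - begin; // rounding remainder
}
```

The point of measuring the *previous* frame is that it costs nothing extra: the work is already being timed, and frame-to-frame load changes slowly enough that last frame's cycle times are a good predictor.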
When I run it, the fans run high and Florida Power has to kick in another nuclear reactor.
I know I might be better off with a desktop, but I am among the poor and stupid who will do this crap from a 44 ft sailboat, an RV, my boycave, or the back lanai, minutes at a time.
Not EVERYONE is the same. But I'm weird, retired and have no life.