Nicole Express explains the Super Game Boy hack that makes the device run faster.
Super Game Boy Overclocked to 5.35 MHz
Sure, there's tons of headroom in those things, but many games simply rely on internal timing tied to the CPU clock, which is exactly why the Turbo button existed in the first place. If you run the CPU clock faster, the game, apps, and so on will run faster (and quite unexpectedly).
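For what it's worth, here is a minimal sketch (purely hypothetical, not from any real game) of that kind of clock-tied timing: the "frame" delay is just a fixed number of CPU iterations, so a faster clock speeds the whole game up.

```cpp
// Hypothetical sketch of CPU-speed-dependent timing: the delay is a fixed
// number of loop iterations, so a faster clock shortens every "frame".
#include <cstdio>

volatile unsigned long sink = 0;  // volatile so the loop isn't optimized away

// "Wait one frame" by burning a fixed number of iterations.
// The constant is tuned for one specific CPU clock; run it on a faster chip
// (or hit a Turbo button) and the whole game speeds up with it.
void wait_one_frame(unsigned long iterations = 500000UL) {
    for (unsigned long i = 0; i < iterations; ++i) {
        sink += i;
    }
}

int main() {
    for (int frame = 0; frame < 10; ++frame) {
        // update game state for exactly one fixed-size step...
        std::printf("frame %d\n", frame);
        wait_one_frame();  // ...then pace the game by raw CPU speed
    }
}
```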
I always thought Sony should've released a special edition of the PS3 that was overclocked and had an SSD. You know there must've been some good overclocking headroom, considering it started out on a 90 nm process node for the CPU & GPU and finally shipped on a 45 nm / 28 nm node.

With the PS4, we finally got the Pro version. Basically the same idea, except it should've also come with an SSD.
According to the Wikipedia page, Sony reduced the Rambus memory speed after the first version, so it's not as if the hardware stayed completely fixed.
Pretty much. CPU-clock-based timing stopped being a thing after the first gen of 3D consoles.
I think by the time we got to PS3 games, most game programmers were using the real-time clock to update the game state. You really couldn't use loop-based timing once the game engine got beyond a certain level of complexity. I think that era pretty much ended with sprite-based graphics. By the time you get to multi-core and 3D graphics hardware, you pretty much have to use the RTC to avoid jarring slowdowns and such.
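To illustrate, here is a minimal sketch of my own (not from any particular engine or the article): with RTC-based updates, the simulation advances by measured elapsed time, so how fast the CPU runs the loop stops mattering.

```cpp
// Minimal sketch of real-time-clock (delta-time) updates: game state advances
// by measured elapsed time, not by how many loop iterations the CPU manages.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;

    double position = 0.0;       // some piece of game state
    const double speed = 10.0;   // units per second, independent of CPU speed
    auto previous = clock::now();

    for (int frame = 0; frame < 10; ++frame) {
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        previous = now;

        position += speed * dt;  // advance by elapsed time, not iteration count
        std::printf("frame %d  dt=%.4fs  position=%.2f\n", frame, dt, position);

        // Stand-in for rendering / vsync; the math above stays correct
        // no matter how long this part takes.
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}
```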
King's Quest IV: The Perils of Rosella made me upgrade from a PC/XT (4.77 MHz, no Turbo) to a PC/AT (16 MHz 286), because without Turbo you can't outrun the dog in the room 😉
There is more than one way to skin a cat... they still use the capability of each core as a limiting device. The games are balanced so that the main loop fills one core and so that all the rest completes the multithreaded part fast enough for the main loop.
I think that was meant to be a conceptual diagram, rather than literal. If your game is designed to generate frames sequentially, then the natural structure is to have the main loop queue up work for the other cores and then block on their completion, more or less. I'm sure it's allowed to have some of those jobs spawn other jobs, to the extent possible, but the main loop needs to be what initiates the cascade of work and waits on its completion.
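Roughly the structure I mean, as a minimal sketch (my own illustration with made-up job names, not the engine's actual code), using std::async to stand in for the other cores:

```cpp
// Sketch of a main loop that fans per-frame work out to worker threads and
// blocks until all of it completes before starting the next frame.
#include <cstdio>
#include <future>
#include <vector>

// Hypothetical per-frame jobs the main loop hands to other cores.
int simulate_physics(int frame) { return frame * 2; }
int update_ai(int frame)        { return frame + 100; }
int build_draw_lists(int frame) { return frame * frame; }

int main() {
    for (int frame = 0; frame < 5; ++frame) {
        // Queue up this frame's work for the other cores...
        std::vector<std::future<int>> jobs;
        jobs.push_back(std::async(std::launch::async, simulate_physics, frame));
        jobs.push_back(std::async(std::launch::async, update_ai, frame));
        jobs.push_back(std::async(std::launch::async, build_draw_lists, frame));

        // ...then block on their completion before the frame is "done".
        for (auto& job : jobs) {
            std::printf("frame %d job result: %d\n", frame, job.get());
        }
    }
}
```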
Nope, that is very literal.
The only way to beat that is to overlap production of sequential frames, but it comes at the cost of additional latency (a rough sketch of the idea is below). I heard about one game (I forget which) that overlaps generation of up to 4 sequential frames in order to achieve the best core utilization and the highest framerates. If it's something like an RTS, then that would be an acceptable tradeoff. If it's a twitchy FPS, then no.
I didn't see that level of detail in the article you linked.
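Here is a rough sketch of the overlap idea (the specific game and its 4-frame limit are from memory, so treat the numbers as illustrative): keep up to a fixed number of frames in flight, retiring the oldest before starting a new one, trading latency for core utilization.

```cpp
// Sketch of overlapping (pipelining) sequential frames: up to MAX_IN_FLIGHT
// frames are being built at once, which improves core utilization but means
// the frame being presented was started several frames ago (added latency).
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <deque>
#include <future>
#include <thread>

int build_frame(int frame) {
    // Stand-in for a full frame's worth of CPU work.
    std::this_thread::sleep_for(std::chrono::milliseconds(30));
    return frame;
}

int main() {
    constexpr std::size_t MAX_IN_FLIGHT = 4;  // more overlap: better utilization, worse latency
    std::deque<std::future<int>> in_flight;

    for (int frame = 0; frame < 12; ++frame) {
        // If the pipeline is full, retire the oldest frame before starting a new one.
        if (in_flight.size() >= MAX_IN_FLIGHT) {
            std::printf("presented frame %d\n", in_flight.front().get());
            in_flight.pop_front();
        }
        in_flight.push_back(std::async(std::launch::async, build_frame, frame));
    }

    // Drain whatever is still in the pipeline.
    while (!in_flight.empty()) {
        std::printf("presented frame %d\n", in_flight.front().get());
        in_flight.pop_front();
    }
}
```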
The frames don't rely on the main loop; one of the jobs on the other cores is what sends the data to the GPU to make frames.
That's why it loses sync if it's run on a different CPU: it doesn't sync everything and then send a frame to the GPU at the end of each main loop iteration.
Everything, the main loop and the rest, runs as fast as possible side by side without any syncing between them in the form of actual code; the only thing that makes it work is that it is "optimized" for the core speed and the number of cores.
(They put so much work in the main loop and in the rest that it balances out naturally.)
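A toy sketch of that arrangement (my own illustration, not the actual engine code): the main loop and a GPU-submit job each free-run on their own thread with no synchronization in the code; on the intended hardware the per-iteration work keeps them in step, and on different hardware they drift apart, i.e. it loses sync.

```cpp
// Two free-running loops with no synchronization between them. The counters
// only stay close because each iteration's work is tuned (here: faked with a
// sleep) to take about one frame on the target CPU; change the hardware and
// nothing ever re-synchronizes them.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> running{true};
std::atomic<long> sim_ticks{0};
std::atomic<long> frames_submitted{0};

void main_loop() {
    while (running) {
        // Pretend this is one main-loop iteration of gameplay work,
        // balanced to take roughly one frame on the intended core.
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
        ++sim_ticks;
    }
}

void render_job() {
    while (running) {
        // A separate job submits to the GPU at its own pace,
        // never waiting on the main loop.
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
        ++frames_submitted;
    }
}

int main() {
    std::thread sim(main_loop);
    std::thread render(render_job);

    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    sim.join();
    render.join();

    std::printf("sim ticks: %ld, frames submitted: %ld\n",
                sim_ticks.load(), frames_submitted.load());
}
```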