What Geofelt said.
Everyone needs to understand that we live in a new age of CPU frequency behavior, where clock speeds are governed by a multitude of factors (temperature, power draw, current limits, workload) instead of the chip targeting a single fixed frequency.
This is easier for CPU manufacturers to implement, and it lets CPUs, GPUs, etc. extract the maximum performance available to the whole system rather than the chip itself being the bottleneck.
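To make that concrete, here's a toy sketch (all numbers hypothetical, not any vendor's actual firmware) of the basic idea: each control interval, the chip effectively runs at the tightest of several independent limits.

```python
# Hypothetical illustration of multi-limit boost behavior:
# the chip runs at whatever clock the most restrictive limit allows.

def allowed_clock(thermal_ghz: float, power_ghz: float,
                  current_ghz: float, fused_max_ghz: float) -> float:
    """Return the clock (GHz) permitted by the tightest active limit."""
    return min(thermal_ghz, power_ghz, current_ghz, fused_max_ghz)

# Example: plenty of thermal headroom, but the power limit binds.
print(allowed_clock(5.2, 4.8, 5.5, 5.7))  # -> 4.8
```

Real boost algorithms are far more involved, but this is why the same chip boosts differently in a laptop versus a desktop: different limits bind first.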
There can be some unfavorable side effects from running chips hot, like thermal paste drying out after just a few years. But it's not dangerous to run these chips at such high temps unless you expect them to last an abnormally long time.
I believe people don't realize that CPUs and GPUs have a silicon health curve baked in, and the chip knows what voltages are safe to apply at any given temperature. This is why AMD CPUs and GPUs, and Nvidia GPUs, start dropping clocks and voltage well before hitting TJmax.
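A minimal sketch of that taper-before-TJmax behavior, assuming made-up numbers (the real voltage/frequency curves are fused per-chip and vendor-specific):

```python
# Illustrative only: a toy model of temperature-aware boost tapering.
# All constants are hypothetical, not any vendor's actual curve.

TJMAX = 95.0        # throttle point in deg C (hypothetical)
TAPER_START = 70.0  # temperature where the chip begins easing off (hypothetical)
MAX_CLOCK = 5.0     # peak boost clock in GHz (hypothetical)
MIN_CLOCK = 3.5     # base clock floor in GHz (hypothetical)

def boost_clock(temp_c: float) -> float:
    """Return the allowed clock (GHz) for a given die temperature."""
    if temp_c <= TAPER_START:
        return MAX_CLOCK           # full boost: thermal headroom available
    if temp_c >= TJMAX:
        return MIN_CLOCK           # hard limit at TJmax
    # Linearly shed frequency between the taper point and TJmax,
    # mirroring how chips back off *before* ever reaching the limit.
    frac = (temp_c - TAPER_START) / (TJMAX - TAPER_START)
    return MAX_CLOCK - frac * (MAX_CLOCK - MIN_CLOCK)

for t in (50, 75, 85, 95):
    print(f"{t:>3} C -> {boost_clock(t):.2f} GHz")
```

The point is that the chip is never operating outside its safe voltage/temperature envelope; it is continuously steering itself back inside it.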
This isn't anything new, either; laptop CPUs have been running at 90-100C for decades at this point, and I can't think of a single mobile processor that has died from "overheating".