News Sam Altman teases 100 million GPU scale for OpenAI that could cost $3 trillion — ChatGPT maker to cross 'well over 1 million' by end of year

Admin

Even as these things start pushing into kilowatt power consumption territory, performance per watt is going up.

100 million GPUs is not achievable for OpenAI any time soon. It could take over 5 years of the AI bubble not bursting, plus significant new fab capacity being rushed online to make more wafers.

10-20 years from now, we could see a push into 3D designs/architectures that blow everything that came before them away in performance per watt, turning today's hardware into a ton of power-guzzling e-waste. We may see 10-100x perf/watt by 2040 from conventional designs, and 10,000x or more after that from exotic 3D, optical/analogue, etc.

It's not like this hasn't happened before, but we have been lured into complacency by the slower progress of the past 10-20 years.
 
But don't worry, investors, he's going to totally make all that money back for you.

(attached image: drevil_cover.jpg)


He pinky promised.
 
I have a feeling that either the AI bubble will burst soon, or we have found a new, accelerated way to destroy our environment and drive our own species extinct at nuclear-war pace... AI (of the LLM type) has no doubt helped replace some donkey work, like drafting boring formal letters, or can serve as a more powerful Google for generic answers.

But time and time again, when it comes to something truly professional, or say, specialized knowledge, LLMs almost always fail to give anything really useful, since they are trained token-by-token from existing documents, and more often produce "professional-looking BS". So in that sense, in the most critical/difficult use cases they will more often than not trick you into trouble.

Now, as LLM improvements progress, the competition for the gigawatts of power the GPU arrays use for training, plus the power and materials needed to produce all those AI GPUs, is creating an astronomical amount of pollution and waste heat being dumped into our planet.

For the past decade we have been trying to use green energy to slow down our own extinction, and now building local power plants just to power AI arrays sounds downright suicidal.
 
Now, as LLM improvements progress, the competition for the gigawatts of power the GPU arrays use for training, plus the power and materials needed to produce all those AI GPUs, is creating an astronomical amount of pollution and waste heat being dumped into our planet.

For the past decade we have been trying to use green energy to slow down our own extinction, and now building local power plants just to power AI arrays sounds downright suicidal.
https://ourworldindata.org/energy-production-consumption

Global energy consumption typically increases at least 2% a year, sometimes as much as 5%. It spiked in 2021 by +5.09% following a 3.48% pandemic-related decline in 2020. Then it was +1.82%, +1.88%, and finally +2.61% in 2024. Maybe that is indicative of the AI boom, but it's not a significant deviation from the norm.
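
For anyone who wants to check that, here's a minimal Python sketch using only the year-over-year percentages quoted above; it shows the 2020-2024 average works out to well under 2% a year, so the post-pandemic spike really does wash out against the long-term trend.

```python
# Sanity check on the year-over-year figures quoted above
# (2020 through 2024: -3.48%, +5.09%, +1.82%, +1.88%, +2.61%).
yoy_pct = [-3.48, 5.09, 1.82, 1.88, 2.61]

growth = 1.0
for pct in yoy_pct:
    growth *= 1 + pct / 100          # compound each year's change

avg_annual = growth ** (1 / len(yoy_pct)) - 1
print(f"Net change 2020-2024: {growth - 1:+.2%}")   # ~ +8.0%
print(f"Average annual growth: {avg_annual:+.2%}")  # ~ +1.5%/year
```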

It would be better if we were using less energy to achieve similar results, and driving that down every year, but that's unrealistic, especially with the global population increasing every year by about +0.85%.

So if "AI" or other technology are being used to accelerate scientific and technological progress (to counter the negative effects of our civilization), we might as well spend whatever power is demanded of it. Big LLMs from OpenAI et al. aren't doing that much good for science, but that could change later. The big exascale supercomputers that are being used for science are pushing 30-40 megawatts each. The top 10 on the Top500 list could be using over 166 megawatts at the same time (probably much less in practice).

~186,000 terawatt-hours globally translates to over 21 million megawatts of continuous power. If OpenAI starts using a gigawatt in 2026, that's 0.00476% of all power (about one twenty-one-thousandth). That's not a significant heating/pollution impact. If OpenAI and all of its competitors started using 100 gigawatts in total, which would be nuts and maybe infeasible, it'd be around 0.5%. Big, but not astronomical.
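
If anyone wants to redo that arithmetic, here's a minimal Python sketch using the same round numbers from the paragraph above (~186,000 TWh/year, and hypothetical 1 GW and 100 GW AI loads); the exact percentages shift a little depending on how you round.

```python
# Convert annual global energy consumption to continuous power,
# then take the share of a hypothetical AI load (figures from the post above).
HOURS_PER_YEAR = 8766                       # 365.25 days x 24 h

global_energy_twh = 186_000                 # ~ global consumption per year, TWh
global_power_gw = global_energy_twh * 1_000 / HOURS_PER_YEAR   # TWh -> GWh -> GW
print(f"Continuous global power: ~{global_power_gw:,.0f} GW")  # ~21,200 GW

for ai_load_gw in (1, 100):                 # 1 GW (OpenAI) vs 100 GW (whole industry)
    share = ai_load_gw / global_power_gw
    print(f"{ai_load_gw:>3} GW of AI load = {share:.5%} of global power")
```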

I didn't cover the energy needed to continuously manufacture and ship millions of GPUs or AI accelerators, but I'd bet it doesn't come close to global fertilizer production (~2% of global energy).
 
Where is the outrage from environmentalists? Seems like a lot of energy and resource drain just so some teenagers can cheat on their homework and people who are bad at their jobs gain 2% more productivity...
 
Where is the outrage from environmentalists? Seems like a lot of energy and resource drain just so some teenagers can cheat on their homework and people who are bad at their jobs gain 2% more productivity...
Even if they use a gigawatt, it's not even 0.01% of global power. It's a drop in the bucket compared to typical annual increases in energy consumption driven by a growing global population (which could grow another 35%).
 
Even if they use a gigawatt, it's not even 0.01% of global power. It's a drop in the bucket compared to typical annual increases in energy consumption driven by a growing global population (which could grow another 35%).
The point is that it's basically way past the diminishing-returns threshold, and we're spending extra gigawatts on nothing practical except gimmicks, like Grok 4's auto-undressing AI partner...

The practical benefit of current LLM-style AI has hit its plateau, and now we're wasting power and silicon, especially precious TSMC capacity, to fuel the "next ChatGPT".
 
This is the same investment scam we saw from the 90s with the dot bomb craze. Seriously people have more money than sense. I have not seen this amount of money pour forth since a televangelist declared God wanted him to buy a private jet so he could spread the gospel.
 
The point is that it's basically way past the diminishing-returns threshold, and we're spending extra gigawatts on nothing practical except gimmicks, like Grok 4's auto-undressing AI partner...

The practical benefit of current LLM-style AI has hit its plateau, and now we're wasting power and silicon, especially precious TSMC capacity, to fuel the "next ChatGPT".
We don't know if LLMs are done improving. They look like they've hit a plateau, but a bunch of competing approaches are being worked on and one of them could take over. 1 GW is not that different from 300 MW, and could allow OpenAI to support more users or train faster.
 
We don't know if LLMs are done improving. They look like they've hit a plateau, but a bunch of competing approaches are being worked on and one of them could take over. 1 GW is not that different from 300 MW, and could allow OpenAI to support more users or train faster.
I recall a paper arguing that current token-based LLMs are hitting a hard wall: because of their combinatorial (token) and probabilistic nature, we are simply running out of training material in the human record to make any further improvement in usability and data accuracy. 1 GW is just the running cost; on top of that, wasting all that silicon and power to produce those mega AI chips is creating a shit ton of pollutants and heat for practically nothing.