News OpenAI has run out of GPUs, says Sam Altman — GPT-4.5 rollout delayed due to lack of processing power

Does AMD have shortages on MI300X and MI325X? Anyone whining about AI GPU shortages needs to realize not everyone can go to a single vendor and expect there's plenty to go around. Partnering with Broadcom will eventually help, but there's only so much leading-edge node capacity available, so some compromises have to be made.

And boo hoo for Sam Altman. It'll arrive when it arrives.
 
But Intel took an inventory write-off on Gaudi, and AMD isn't selling as many MI300s as expected. It seems like there's no room for more than one company in the AI accelerator space.
 
Does AMD have shortages on MI300X and MI325X? Anyone whining about AI GPU shortages needs to realize not everyone can go to a single vendor and expect there's plenty to go around. Partnering with Broadcom will eventually help, but there's only so much leading-edge node capacity available, so some compromises have to be made.
For quite a while, the limiting factor has been HBM & advanced packaging capacity for it, not fab capacity for the compute dies. All HBM production for 2025 has already been sold.

Also, I might have this wrong, but I think I read that Micron diverted some of its GDDR7 (?) production to making HBM instead. If so, perhaps that has something to do with the RTX 5000 scarcity problems at launch.
 
I'm boggled by these numbers.
I think very highly of 4o when properly used, yet here he is, bottlenecked on a version 4.5 that he admits is virtually the same, just 10x-15x more expensive.
This must be the $80B that Nadella says Microsoft is still spending this year, and it seems like a waste.

Within ten years they'll have algorithms that can run this stuff 100x more efficiently on even today's hardware.

Also, do we know whether Altman is using only new B200s, only old H100s, or some mix, for inference?
 
We can probably see now why the "AI" hoax won't take flight anytime soon.
It's just that ineffective in terms of energy: megawatts for random, unclean answers is all "AI" is capable of.
And no, "different hardware" is nowhere on the horizon. CPU and GPU transistor sizes are almost at their physical limits.
 
AI is depicted as the savior for all our problems. So far, it has only compounded them. One day, we as a species will have created an AI so smart it will be able to convince us it was incredibly stupid to use absurd amounts of resources and energy to make AIs that do the same thing as a simple internet search.
 
There's a good paper explaining this "AI" that I love to repost, so here it is again:
https://link.springer.com/article/10.1007/s10676-024-09775-5
The paper discusses LLMs, not "AI". Large Language Models are a small subset of the broader AI field. Even concerning neural networks, they're only one type. Finally, it's dated June 2024. This is a rapidly moving field and, as the technology evolves, some of its claims might lose their applicability.

I'd just say that if something walks like a duck and quacks like a duck, how do you know it's not a duck? When you have an interaction with an AI that can reason and seems intelligent by every measure, you should ask yourself why that's not intelligence.

For now, yes. LLMs are basically trained to give plausible responses, without much regard to whether or not they're actually true. We've accidentally trained BS artists. In spite of that, what's giving a lot of people pause is that we're learning that a lot of cognitive tasks we assumed were only possible with full General AI are quite doable with more limited and purpose-built models. This has significant economic implications.
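The "plausible, not necessarily true" point comes down to how decoding works: at each step the model scores candidate next tokens and one is sampled in proportion to those scores; truthfulness never enters the calculation. A minimal sketch with toy numbers (the token names and score values here are made up for illustration, not from any real model):

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=None):
    """Softmax-with-temperature sampling: a token is chosen in
    proportion to how plausible the model rates it. Nothing in this
    procedure checks whether the continuation is true."""
    rng = rng or random.Random()
    scaled = [s / temperature for s in scores.values()]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    return rng.choices(list(scores.keys()), weights=weights, k=1)[0]

# Toy scores: a fluent-but-wrong continuation rates nearly as high as
# the correct one, so it gets sampled a sizable share of the time.
scores = {"correct_fact": 3.0, "plausible_error": 2.6, "gibberish": -6.0}
print(sample_next_token(scores, temperature=0.8))
```

Lower temperatures sharpen the distribution toward the top-scoring token; higher ones flatten it, which is why the same prompt can yield different answers on different runs.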
 
Am I the only one that's unnerved by the head of OpenAI saying there's "magic" that can be "felt" when talking about a digital computer model? I truly hope he's not starting to believe a collection of floating-point numbers is becoming something more than it is.
 
Am I the only one that's unnerved by the head of OpenAI saying there's "magic" that can be "felt" when talking about a digital computer model?
I've never followed him closely, but I just treat that as basically a sales pitch. He really needs to keep the momentum going, because by now most people have had a variety of experiences with ChatGPT and he's got to try and convince them that their latest, greatest, and much more expensive models are truly better and worth using.

Not to mention newer, cheaper alternatives like DeepSeek. In fact, I wonder just how big a bite DeepSeek has been taking out of their revenues.
 
a lot of cognitive tasks we assumed were only possible with full General AI are quite doable with more limited and purpose-built models. This has significant economic implications.
These models are all giving randomized BS, and that's all. Yes, it will be milked for money until the whole thing is blown wide open.
The only economic implication is going to be an even heavier strain on energy resources. Time lost not included.
 
Yes, there are some very refreshing purpose-built applications for return-based matrix manipulations, like maintaining dynamic physics (i.e., weight) balance in a changing environment.

But calling this "AI" is a very long shot.
 
These models are all giving randomized BS, and that's all.
One day, I predict, you're going to have an interaction with something you believe is a human, and it'll fully break you when you discover it's not. Until then, I'm sure you'll keep your head buried deep in the sand and keep singing from that hymn sheet.

It's like @baboma said, ego defense is a very powerful influence on human behavior.
 
DeepSeek won't get far in the Western world. Many countries and orgs have banned it because of its Chinese roots. But the idea behind it (cheap LLM training) will have wide reverberations.
We've already heard of people switching from ChatGPT to running their own inference on DeepSeek models. As long as you're not concerned about how it handles politically sensitive subjects, this doesn't pose the risks that using their cloud service would.

In other cases, there are consumers and end users who might balk at OpenAI's price increases and instead switch to DeepSeek for something that's usually good enough for their needs.
 
One day, I predict, you're going to have an interaction with something you believe is a human, and it'll fully break you when you discover it's not. Until then, I'm sure you'll keep your head buried deep in the sand and keep singing from that hymn sheet.

It's like @baboma said, ego defense is a very powerful influence on human behavior.
Luckily, and rarely for nowadays, I still have enough brains to sort out the "AI" hoax bullshit.