You have to remember that, at least until a few years ago, the whole credit for the success (such as it is) of LLMs went to scale, scale, scale. And the simplest projection is a linear (or better) increase with no limits. So if that's true, the first guy to spend a trillion dollars on scale wins. Rules the world, perhaps.
This has been a question about AI since the early days: "maybe there's some critical mass, like with uranium and plutonium, and that's all there is to it!"
And the odds are that this is partially right and partially wrong. Maybe scale still has a ways to go, but maybe the *cost* of scale can come down. Maybe some algorithmic tuning can make training 10x, 100x, 100000x more efficient.
The human brain runs on about 20 watts and fits in a goldfish bowl, so if we were just doing it right, wouldn't AI?
So my point? Oh yeah. The efficiency and cost of training are already improving by 10x or so, and promise to keep improving, and Altman knows that (as much as he knows anything). This is just his act, his schtick; make of it what you will. OpenAI is doing some good work while he zooms around saying silly things.