News Nvidia CEO Jensen Huang says $7 trillion isn't needed for AI — cites 1 million-fold improvement in AI performance in the last ten years

Status
Not open for further replies.
It's only logical Jensen would say this. If Altman's custom AI silicon becomes a reality, it's a competitor to NVIDIA, which will eat into their profits.

His statement boils down to: "Don't give our competitor money," which is a sign of worry.

The proper reply would have been: "Good luck keeping up with us."
It sounds like he forgot that he's not just talking to a bunch of knuckle-dragging gamers, this time.


"Remember that the performance of the architecture is going to be improving at the same time so you cannot assume just that you will buy more computers," said Huang. "You have to also assume that the computers are going to become faster, and therefore, the total amount that you need is not going to be as much."
People know this, but they also know that future AI networks will increase in size & sophistication. So far, the number of parameters has increased faster than the speed of GPUs. Whether that will continue I can't say, but it seems a safe assumption that model sizes will keep growing, and therefore you won't be training something like a GPT-n model on a single GPU at any point in the foreseeable future.
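To put some rough numbers on that point: if compute demand tracks parameter count, and parameters grow faster than per-GPU throughput, the GPU count per training run keeps climbing no matter how fast the chips get. The growth rates below are illustrative assumptions, not measurements:

```python
# Illustrative sketch (hypothetical growth rates, not measured data):
# if model sizes grow faster than per-GPU speed, the number of GPUs
# needed for a state-of-the-art training run keeps rising.

def gpus_needed(years, param_growth_per_year, gpu_speedup_per_year, gpus_today=1.0):
    """Relative GPU count needed, if compute demand tracks parameter count."""
    demand = param_growth_per_year ** years   # compute needed grows with model size
    supply = gpu_speedup_per_year ** years    # each individual GPU gets faster
    return gpus_today * demand / supply

# Assume parameters grow ~10x/year (roughly the late-2010s LLM trend)
# while per-GPU throughput grows ~2x/year (a generous assumption):
for y in (1, 3, 5):
    print(f"after {y} year(s): {gpus_needed(y, 10, 2):.0f}x as many GPUs")
```

Even with generous hardware scaling, the gap compounds: 5x more GPUs after one year, 125x after three. That's the dynamic Huang's "computers get faster" framing glosses over.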


"One of the greatest contributions we made was advancing computing and advancing AI by one million times in the last ten years, and so whatever demand that you think is going to power the world, you have to consider the fact that [computers] are also going to do it one million times faster [in the next ten years]."
This is even more unhinged than the last claim! A lot of those multiples came from one-time optimizations, like switching from fp32 to fp8 arithmetic, introducing Tensor Cores, and supporting sparsity. That low-hanging fruit has largely been picked! There aren't many other easy wins like that left.

Furthermore, density and efficiency scaling of new process nodes is slowing down, as are gains in transistors per $. What that means is that manufacturing-driven performance scaling is also mostly exhausted.

So, I don't see any reason to believe the insane pace of AI performance improvements will continue. There will still be gains, but more on the order of 10x or maybe 100x over the next 10 years. Not a million-fold.
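The arithmetic here is worth spelling out. A million-fold over ten years implies roughly 4x compounding every single year, whereas 10x or 100x per decade is a far more modest annual rate. And one-time multipliers compound, but each can only be claimed once. The per-optimization factors below are hypothetical round numbers for illustration, not benchmark results:

```python
# Sketch: what annual improvement rate does each decade-long multiple imply?
# Pure arithmetic; no performance measurements here.

def annual_rate(total_multiple, years=10):
    """Compound annual growth factor implied by a total multiple over N years."""
    return total_multiple ** (1 / years)

for total in (1_000_000, 100, 10):
    print(f"{total}x over 10 years = {annual_rate(total):.2f}x per year")

# One-time wins compound, but they're spent once (hypothetical round numbers):
# fp32 -> fp8 arithmetic ~4x, Tensor Cores ~8x, 2:4 structured sparsity ~2x.
one_time = 4 * 8 * 2
print(f"those one-time wins: ~{one_time}x total, never repeatable")
```

So a million-fold decade demands ~4x/year sustained, while the realistic 10x-100x scenarios only need ~1.26x-1.58x/year. That gap is the whole argument.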
I have my fears of an impending market crash. It is way too top-heavy with overspeculation, and investors tend to become skittish at a moment's notice. The reality is that the infrastructure just isn't there for the hype. The A.I. language models are still using the same neural-network methods that were being researched decades ago; it's just that we finally have the right pieces coming together to utilize them. The wide availability of data, and the hardware/software to exploit it, has been in play for more than a decade, and most of the utility remains the same since then, just more precise and with lower error rates.

The utility hasn't changed much, and it's rather limited given the hype and the investments going in.
Just like with the dot-com boom, the utility just wasn't mature enough at the time. About a decade later, so much had changed, with sites like YouTube and Facebook.
LLMs definitely have potential, but I don't think it warrants this type of hype yet. The shovel sellers are making the majority of the profits: companies like Nvidia, and those who build the data centers. Most companies have no idea how to properly utilize it, and there is a severe lack of people employed who understand A.I. generative language models.
 
The A.I. language models are still using the same neural network method that was being researched decades ago.
You're talking about how they're trained? The innovation is in their design & implementation, not the training method!

Also, kinda weird that you focus on LLMs. They're only one thing happening in AI, right now. OpenAI is rumored to have made significant strides in AGI, for one thing...

most of the utility remains the same since then just at a more precise level with less error rates.
The utility requires reasonably low error rates, or the output is just useless rubbish.
 
It is way too top-heavy with overspeculation, and investors tend to become skittish at a moment's notice.
To be perfectly fair, that's just markets in general.

I don't believe we will face a market crash, but I expect investments will taper off when the technology fails to advance as fast as expected. Investors tend to expect a relatively quick ROI, and aren't much for long-term planning.
 