News Jensen says we are 'several years away' from solving the AI hallucination problem — in the meantime, 'we have to keep increasing our computation'

bit_user

Titan
Ambassador
Jensen said:
We have to get to a point where the answer that you get, you largely trust.
I think it's probably not too hard to train AI to check its answers in similar ways that humans would check either an AI-generated answer or maybe that of a human we have reason to doubt. This will certainly increase the time it takes for AI to reach good answers, as it would then have to do more iteration to pass such a built-in "BS filter".
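Something like this loop is what I have in mind. A minimal sketch, with generate and verify standing in for hypothetical model calls (not any real API):

```python
# A minimal sketch of the generate-then-verify loop described above.
# `generate` and `verify` are stand-ins for hypothetical model calls,
# not any real API. Note that every verification round costs another
# inference pass, which is where the extra compute and time go.
from typing import Callable

def answer_with_bs_filter(question: str,
                          generate: Callable[[str], str],
                          verify: Callable[[str, str], bool],
                          max_rounds: int = 3) -> str:
    """Regenerate until a second model (the built-in 'BS filter') accepts the answer."""
    for _ in range(max_rounds):
        answer = generate(question)
        if verify(question, answer):   # checker model acts as the filter
            return answer
    return "I'm not confident enough to answer."  # abstain, don't hallucinate
```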

I've heard that human brains have something like a "BS filter" that suppresses thoughts that don't make sense. Certain drugs inhibit this inhibitory interaction, which potentially explains why some people associate them with increasing creativity.
 
  • Like
Reactions: mitch074
I think it's probably not too hard to train AI to check its answers in similar ways that humans would check either an AI-generated answer or maybe that of a human we have reason to doubt.
Except they can't "think" for themselves if something sounds off. LLMs treat all data as "real"; that's why "poisoning" is so bad for them, as you can't untrain them like you can a human.

Having it check whether its training is right only works if the training data is fully human input that has been verified as true. But models also learn from the output of other LLMs, so if you fact-check a lie against another lie, you assume it's the truth.
 
  • Like
Reactions: Sleepy_Hollowed

Gururu

Prominent
Jan 4, 2024
305
203
570
The AI he describes is heavily crafted by the human experience. I wonder if there are any outfits out there approaching it the way nature would, only a billion times faster.
 

bit_user

Titan
Ambassador
The article said:
When someone pointed out that Nvidia’s AI GPUs are still expensive, Huang said that it’d be a million times more expensive if Nvidia didn’t exist. “I gave you a million times discount in the last 10 years. It’s practically free!”
This point is bogus. Furthermore, there seems to be some serious inflation of the claim, perhaps even greater than Nvidia's stock price!
:D

As far as I can tell, this is referring to "Huang's Law", where just over a year ago, he was claiming only 1000x speedup over 10 years.

The way he arrived at that figure was: 16x from better number handling, 12.5x from using reduced-precision fp and int arithmetic, 2x from exploiting sparsity, and 2.5x from process node improvements.
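As a quick sanity check, those four factors do multiply out to the claimed 1000x:

```python
# Sanity check: the four factors Huang cited multiply out to 1000x.
factors = {
    "number handling":          16,
    "reduced-precision fp/int": 12.5,
    "sparsity":                 2,
    "process node":             2.5,
}
total = 1.0
for contribution in factors.values():
    total *= contribution
print(total)  # 1000.0
```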

Most of those areas are wells you can't keep going back to. Sure, there has been further work on reducing precision and improving sparsity handling, as well as weight compression and arithmetic approximations, so I'm not saying the well of innovation has run dry. But when you take a somewhat generic architecture and optimize it for a specific problem, a lot of the big gains come early. So, I'm not expecting to see a repeat performance here.

As for the claim of 1M, the remaining 1000x can only be referring to the scalability achieved through their HGX systems' use of NVLink and NVSwitch to link an entire rack's worth of GPUs into a coherent memory space. This last 1000x is perhaps the most troubling, since performance here scales only about linearly with cost. In other words, 1000 GPUs are going to cost you more than 1000x the price of one (assuming both are obtained through the same channel, and not just comparing the retail cost of one vs. the direct-sales cost of 1000). So, he provided the capability to scale, but not a real cost savings.
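With made-up numbers, the shape of the argument looks like this: linear scale-out buys capability, not performance per dollar.

```python
# Made-up numbers, just to show the shape of the argument: linear
# scale-out buys capability, not performance per dollar.
gpu_price = 30_000            # hypothetical single-GPU price, USD
rack_gpus = 1000              # GPUs linked via NVLink/NVSwitch

perf_one  = 1.0               # normalize one GPU's performance
perf_rack = perf_one * rack_gpus     # ~linear scaling, best case
cost_rack = gpu_price * rack_gpus    # cost scales at least linearly

print(perf_one / gpu_price)          # perf per dollar, one GPU
print(perf_rack / cost_rack)         # identical: no cost savings
```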

Returning to the 1000x scaling from "Huang's Law" for a moment: a lot of these same things were being done by others. Plenty of people were looking at reducing arithmetic precision, building dedicated matrix-multiply hardware, etc. Nvidia was among the first to deploy them at scale, but it's not as if the industry wouldn't have moved in a mostly similar direction and pace, had Nvidia not been in the race.

Huang's biggest contribution was probably his aggressive push of CUDA and GPU compute, and his embrace of the industries and applications where it was found to have the greatest potential. That's what I think gave them an early lead down the path of AI. It's a lead they can keep only if they're nimble, and here's where CUDA could turn out to be a liability. That's because it forces them either to make their GPUs more general than you really need for AI, or else to break CUDA compatibility and do a whole lot of work to adapt the existing CUDA-based software to newer architectures. This is what I think Jim Keller meant by his statement that "CUDA is a swamp, not a moat". It has now become an impediment to Nvidia further optimizing its architectures in ways similar to its biggest AI competitors (who, other than AMD and Intel, generally seem to have embraced dataflow architectures over classical GPUs).
 
  • Like
Reactions: Trake_17

Notton

Commendable
Dec 29, 2023
873
769
1,260
Jensen is a clever businessman that learns from past mistakes.
His answer will always be: buy more of my merch to solve your problem.

IMO, it'll take a breakthrough in computing technology to invent real AI, and not this LLM binary compute stuff.
 
  • Like
Reactions: Trake_17
Why can't it do some internet searches and look at sources to confirm?
It can, but it won't know on the spot whether the answer is right or wrong, as they aren't "smart"; they are programs that can only do what they were designed to do.

In a video, GN looked up the release date of a GPU. The AI gave the wrong date, but cited his own video, which has the correct date, as its source.
LLMs are stupid, no matter how smart they look.
On the internet, there are multiple answers to every question (be it trolls or just people stating stuff without facts), and the AI doesn't understand a world where there are contradictory answers to the same question.


having it "check" for a source does nothing when it can't tell if its a correct source or not and even if you could it would require a LOT more computational effort and time to give answer. (both which arent ideal as they want them to use less time and energy to do)

IMO, it'll take a breakthrough in computing technology to invent real AI, and not this LLM binary compute stuff.
That's why it's a race for AGI (artificial general intelligence) by everyone: the first one to get there wins the AI race, and every other "AI" effort will have wasted all its time and money.

The AI race can have a billion runners, but only one will have AGI as the winner.
 

epobirs

Distinguished
Jul 18, 2011
220
27
18,720
I have great doubt that any amount of additional compute will solve the hallucination problem. My guess is that it's a structural issue. A smarter liar will make up better lies but they'll still be lies.

Which isn't a problem for some applications. If you wanted to make Star Wars movies based on the books that pick up right after 'Return of the Jedi', AI will probably do a great job of turning a screenplay into a movie. Tweaking will be needed, but the cost will be a small fraction of what it takes to make some of the recent disappointments.
 

DougMcC

Reputable
Sep 16, 2021
184
127
4,760
Jensen is a clever businessman that learns from past mistakes.
His answer will always be: buy more of my merch to solve your problem.

IMO, it'll take a breakthrough in computing technology to invent real AI, and not this LLM binary compute stuff.
At a fundamental level, human brains are not that complicated. If we believe that humans have intelligence, we need do no more than accurately simulate a human brain to achieve AGI. And we are rapidly approaching the compute necessary to do so. The only real question is whether we can do it more cheaply than that, and these LLMs hint maybe this is possible.
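For a rough sense of scale, here's a back-of-envelope version of that claim; every constant is an assumed ballpark figure, not a measurement:

```python
# Back-of-envelope estimate of real-time brain simulation. All constants
# are commonly cited ballpark figures (assumptions, not measurements);
# only the order of magnitude matters.
SYNAPSES        = 1e14   # ~100 trillion synapses
FIRING_RATE_HZ  = 10     # assumed average spikes per neuron per second
FLOPS_PER_EVENT = 10     # assumed arithmetic per synaptic event

brain_flops = SYNAPSES * FIRING_RATE_HZ * FLOPS_PER_EVENT
exascale    = 1e18       # Frontier-class supercomputer, FLOP/s

print(f"brain:    ~{brain_flops:.0e} FLOP/s")    # ~1e16
print(f"exascale: ~{exascale:.0e} FLOP/s, "
      f"{exascale / brain_flops:.0f}x the estimate")
```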
 

leoneo.x64

Reputable
Feb 24, 2020
11
9
4,515
Why can't it check a claim the same way you would? Why can't it do some internet searches and look at sources to confirm?
It can, but it isn't designed to. It's not designed to record, remember, or cite sources; it's just designed to learn and take its own word for it. That's why you can't even tell when it's hallucinating. More compute won't solve that. Not making it a black box will!
 

watzupken

Reputable
Mar 16, 2020
1,178
660
6,070
This is the "the more you buy, the more you save" kind of marketing: the more you buy, the faster you will solve AI hallucination. But in reality, LLMs lack reasoning ability, so training them on more garbage doesn't solve the problem at all.
 
Jensen is a clever businessman that learns from past mistakes.
His answer will always be: buy more of my merch to solve your problem.

IMO, it'll take a breakthrough in computing technology to invent real AI, and not this LLM binary compute stuff.
We had canoes before we had speedboats!
Until that speedboat breakthrough happens, his only choice is to stay the course!
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
I have great doubt that any amount of additional compute will solve the hallucination problem. My guess is that it's a structural issue. A smarter liar will make up better lies but they'll still be lies.
I think the main reason hallucinations happen is that these models weren't incentivized to judge and qualify how sure they are about something. That seems not too hard to remedy.
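Here's a toy version of that incentive problem (my own construction, not any lab's actual training objective): under accuracy-only scoring, guessing always beats admitting uncertainty; penalize confident errors, and abstaining becomes the rational choice when the model is unsure.

```python
# Toy scoring model (my construction, not any lab's actual objective).
# Under accuracy-only scoring, a guess with any chance of being right
# beats abstaining. Penalize confident errors and abstaining becomes
# rational whenever the model's confidence is low.
def expected_score(p_correct: float, wrong_penalty: float) -> float:
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN = 0.0  # "I don't know" scores nothing under either scheme

for p in (0.9, 0.5, 0.1):
    naive = expected_score(p, wrong_penalty=0.0)   # accuracy-only
    strict = expected_score(p, wrong_penalty=1.0)  # errors cost a point
    print(f"p={p}: naive={naive:+.2f} (always answer), "
          f"strict={strict:+.2f} ({'answer' if strict > ABSTAIN else 'abstain'})")
```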

I find it kind of funny how people seem to act like humans don't spout BS all the time. Sometimes, we know we're lying or at least making tenuous claims, based on little underlying information. Other times, we misremember things or reach incorrect conclusions. Yet, when AI does these things, somehow it's treated like a new thing!
 
From what I can tell, the only reason ChatGPT/AI hallucinates is that it's being forced to answer regardless of whether it believes it knows the answer, or has had the required time to gather and form the answer from its multiple terabytes of stored data.

Imagine trying to give an oral book report on a book when you only read the CliffsNotes 10 minutes prior.
Some facts may be untrue, but you bend them to fit what you believe the story is, and hope that your confidence convinces the teacher that you actually read the book.

(Totally not taken from past experiences from the book Island of the Blue Dolphins :p )
 
Except they can't "think" for themselves if something sounds off. LLMs treat all data as "real"; that's why "poisoning" is so bad for them, as you can't untrain them like you can a human.

Having it check whether its training is right only works if the training data is fully human input that has been verified as true. But models also learn from the output of other LLMs, so if you fact-check a lie against another lie, you assume it's the truth.
It's more a matter of AIs being purely probability engines: if something looks like something else, then the answer fitting that "something else" should fit that something too. Mathematically, it's sound; in practice, not really.
 

ttquantia

Prominent
Mar 10, 2023
15
14
515
Hallucinations are not going to go away, because: A) the training data will always be flawed, containing wrong and misleading "facts", or opinions that look like facts; B) you cannot decide "truth" by statistics ("most people seem to think so, and that's why it is true"); and C) even if all the training data were factually correct, there is nothing in LLMs, or generative AI in general, that would guarantee that from true facts you always get true facts.
Hallucinations are here to stay.
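Point B in one toy example (my own construction, not how any real model decides anything): if "truth" is decided by corpus frequency, a popular mistake wins, confidently.

```python
# Toy illustration of point B (my construction): deciding "truth" by
# frequency in the corpus. When the popular claim is the wrong one,
# statistics pick it confidently.
from collections import Counter

corpus = ["released in 2023"] * 7 + ["released in 2022"] * 3  # made-up docs
ground_truth = "released in 2022"

consensus, votes = Counter(corpus).most_common(1)[0]
print(f"consensus: '{consensus}' ({votes}/{len(corpus)} documents)")
print(f"matches ground truth: {consensus == ground_truth}")   # False
```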
Buying 10x or 100x more GPUs would of course help Nvidia, so in this respect the Nvidia salesperson is right.