News: The refresh that wasn't - AMD announces 'Hawk Point' Ryzen 8040 Series with Zen 4, RDNA3, and XDNA

A trainable, offline model is the best way to do it, IMHO. I want the flexibility to work in highly specialized contexts.
You are never going to have the compute resources to train a model like GPT-4.

However, what we could get is a model that can figure out what information it needs to look up, find it, and then reply with that new knowledge at its disposal.

This is also what's ultimately needed for Tom's HammerBot, I think. People expect it to have memorized every single detail about every model of every piece of hardware, but LLMs don't work like that - they're not databases. They pick up on general patterns and concepts. If a fact is repeated enough, they'll learn it, but it's not sufficient for them to see something only once - or even a few times.
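As a rough illustration of the "look it up, then answer" approach described above (retrieval-augmented generation), here is a minimal, self-contained sketch; the document store, the keyword search, and local_llm() are hypothetical stand-ins, not any actual HammerBot code.

```python
# Minimal sketch (not from the thread) of the "figure out what to look up,
# find it, then answer" idea (retrieval-augmented generation). The document
# store and keyword search are toy stand-ins, and local_llm() is a placeholder
# for whatever locally hosted model would actually be used.
DOCS = {
    "ryzen 8040 hawk point": "Hawk Point: Zen 4 CPU cores, RDNA 3 graphics, XDNA NPU.",
    "ryzen 7040 phoenix": "Phoenix: Zen 4 CPU cores, RDNA 3 graphics, first-generation XDNA NPU.",
}

def search_docs(query: str, top_k: int = 1) -> list[str]:
    """Toy retrieval: rank documents by how many query words their key shares."""
    words = set(query.lower().split())
    ranked = sorted(DOCS.items(),
                    key=lambda kv: len(words & set(kv[0].split())),
                    reverse=True)
    return [text for _, text in ranked[:top_k]]

def local_llm(prompt: str) -> str:
    """Placeholder for a call to a local model; returns a canned string here."""
    return f"[model reply based on prompt: {prompt[:80]}...]"

def answer_with_lookup(question: str) -> str:
    passages = search_docs(question)                 # 1. find the relevant facts
    context = "\n".join(passages)                    # 2. put them in the prompt
    return local_llm(f"Context:\n{context}\n\nQuestion: {question}")  # 3. answer from context

print(answer_with_lookup("Which GPU is in the Ryzen 8040 series?"))
```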
 
Define thinking.
If I were stupid, I would tell you to "ask ChatGPT".

It's as much statistics as your own brain.
A bit offensive, don't you "think" so?

At some point, a model of a system transcends a mere statistical model and becomes something different.
Yes, this happens in every sci-fi movie. In the real world, if you join neurons to more neurons in a random manner and feed them more and more inputs, what do you get? Only a big mass of neurons that "thinks" no more than the little mass did.

Perhaps the distinguishing factor is that we expect statistical models to hold for aggregates, but not apply in highly-individualized scenarios.
And why not? An "individualized scenario" is just a normal input to the statistical model, which, being "not intelligent" (i.e. "not thinking"), cannot tell the difference and simply produces the most probable output.

Another thing you can't do with mere statistics is to generate structured data, the way generative AI synthesizes text, images, and videos.
You have been able to generate structured data with software, without AI, for years, but nobody claims that software is thinking.

Given that you clearly haven't ever taken a course or read a book on the underlying numerical & algorithmic methods behind neural networks, I don't know how you feel qualified to make such strong assertions about its fundamental nature.
For sure I'm not an expert, but I certainly know how neural networks work.
So please suggest a book in which the author asserts that today's neural networks are capable of "real thinking" or are "intelligent" for real.
And please, stop philosophizing about the meaning of "thinking" and "intelligence".
Thanks.
 
A bit offensive, don't you "think" so?
If that's how you mean it, then maybe you don't know as much about neural networks as you believe.

You have been able to generate structured data with software, without AI, for years, but nobody claims that software is thinking.
But you're not calling that software "statistics". Moreover, the software was designed to make such structured data, whereas generative AI learns to do it.

So please suggest a book in which the author asserts that today's neural networks are capable of "real thinking" or are "intelligent" for real.
What you probably mean by '"intelligent" for real' is called "general AI". Nobody is saying LLMs are at that level, yet. I never said they were, either. I just said it's not mere statistics.

Calling it "statistics" completely ignores the fact that AI learns. The act of learning involves creating structures to model concepts and abstractions. That's well beyond the scope of what can reasonably be considered statistics.
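To make the "learning builds structures" point concrete, here is a minimal sketch (not from the thread, plain NumPy): with uniform inputs, each bit of XOR is completely uncorrelated with the output, so a per-variable statistical summary predicts nothing, yet a network with one hidden layer learns the mapping by forming an internal representation of the interaction.

```python
# Minimal sketch (not from the thread): XOR as an example of a relationship that
# per-variable statistics miss but a tiny neural network can learn.
# With uniform inputs, each input bit has zero correlation with the XOR output,
# so no single-variable frequency table predicts it; a one-hidden-layer network
# learns it anyway by building an internal representation of the interaction.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, trained by plain full-batch gradient descent.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))
lr = 1.0

for step in range(10_000):
    h = sigmoid(X @ W1 + b1)        # hidden representation (the "learned structure")
    out = sigmoid(h @ W2 + b2)      # prediction

    d_out = (out - y) * out * (1 - out)   # gradient of squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically close to [[0.] [1.] [1.] [0.]]; depends on the random init
```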

And please, stop philosophizing about the meaning of "thinking" and "intelligence".
I only asked you to define what it is you're saying LLMs can't do. If you can't even define it, then how can you be so sure they aren't doing it?

BTW, I didn't say "intelligence" - you're the one who went there.
 