News I Asked ChatGPT How to Build a PC. It Said to Smush the CPU.


bit_user

Titan
Ambassador
The author is nitpicking what is, after all, a prototype. Even in its present form, the AI's guide is useful: it forms a meaty framework that a human writer could flesh out with details, thereby reducing their workload.

ChatGPT (and its inevitable successors) will only improve.
Not only that, but consider that it wasn't trained on carefully vetted material. If you made sure to feed it only forum posts marked as "best answer", actual motherboard manuals, and PC repair books, I'll bet it would give better advice than most of us here!

IMO, step-by-step how-tos are not a high bar to clear. They don't require any creative thought or reasoning.
I think we're very likely to see it used in troubleshooting, diagnostic, and repair contexts. There has long been a field of AI known as expert systems, which seeks to fill exactly this role. ChatGPT has demonstrated a lot of potential there.
 

bit_user

Titan
Ambassador
When an algorithm contradicts itself by saying:

"France has not won the 2018 World Cup. The 2018 World Cup was won by France."


...maybe it's time to admit ChatGPT lacks any intelligence whatsoever.
Maybe that just demonstrates that its designers focused more on memory and recall than on ensuring the logical consistency of what it knows. Seems like a potentially fixable problem to me.

But, if you enjoy talking trash about it, then I guess you'd better get your lulz in now. In a few short years, the AI will be making fun of us idiot humans!
; )
 

bit_user

Titan
Ambassador
The bot doesn't know, or rather isn't aware, that it doesn't even have a butt to begin with.
To some degree, it does. The model isn't nearly big enough to simply store every single fact in its training data, so it builds higher-level structures which (ideally) provide representational efficiency and logical consistency. We just don't know how sophisticated its knowledge representations are. And, just like a human, if you drill incorrect information into it enough, it's likely to stick.

But even children usually don't end up listing whatever as their accomplishment just because someone else claimed it to be an accomplishment.
It can take them a while to figure out what's what.

Humans are different; we possess the ability to think critically beyond our 'programming'.
You misjudge what "thinking" really is. There are already AI techniques, like GANs, that replicate key aspects of thinking and contemplation.

The problem is that tools like ChatGPT, and AI in general, are expected to be like humans, which they can't be (yet?)
True. It's a bit like a sophisticated parrot. Worse, due to its black-box nature and complexity, it's hard to know when to trust it or the exact extent of its capabilities and knowledge. This is probably one of the biggest dangers AIs pose.
 
Last edited:

bit_user

Titan
Ambassador
ChatGPT is completely unreliable. I asked it to name the best DAWs (digital audio workstations) for Windows, and ChatGPT came up with "Logic Pro", which is only available for macOS and certainly not for Windows.
I'm actually amazed that it gave an answer that even sort of makes sense, given that it wasn't trained specifically to be an expert on that field.

It's weird how people seem to have these incredible expectations for it, as if it should now be as smart and knowledgeable about every domain as the sum of all humans!
 

bit_user

Titan
Ambassador
And lazy college kids are tripping over themselves to use this "fancified" Google search to help them with assignments??

Yeah, "the future is so bright, I gotta wear shades. "
I'm sure people said this when kids started using slide rules ...and then when they used calculators ...and then when they used graphing calculators ...and then when they could start using their laptops to take exams.

A human brain is only so big. It behooves us to find our strengths and weaknesses, and then to learn how to harness technology to compensate or buttress the weaknesses so we can focus on our strengths.
 

bit_user

Titan
Ambassador
But thats what we're being led to believe.
Magic.
Who's saying it's magic? Not the designers or expert practitioners in the field.

Yeah, there's a lot of hype about it, because it represents a step-change relative to what came before. When you see a child start walking well enough to get around the house, you don't complain that it's not able to complete a marathon. Instead, you probably celebrate that you no longer have to haul the stroller with you, everywhere you go!
 

USAFRet

Titan
Moderator
Who's saying it's magic? Not the designers or expert practitioners in the field.

Yeah, there's a lot of hype about it, because it represents a step-change relative to what came before. When you see a child start walking well enough to get around the house, you don't complain that it's not able to complete a marathon. Instead, you probably celebrate that you no longer have to haul the stroller with you, everywhere you go!
The hype, as you say.

ChatGPT, and the college kids glomming onto it
Tesla
Idiots who have been talking about codeless programming for many years.

Because the child can take two steps, the marketing droids scream loudly: "New marathon champion!"

No one listens to "designers or expert practitioners". They see and believe the marketing idiots.


All I'm saying is...we're not even a little bit close yet.
 
  • Like
Reactions: COLGeek
I've been a reader of Toms since the early 90s and love it. Huge fan of all of the content.

However, I don't think this author read enough about ChatGPT before deciding to write an article about it, and he sort of misses the point.

It is designed to be iterative. Sure, the first answer is high-level, but you can follow that first question with another: "give me more detail about inserting the CPU." Or "add more about all of the parts I will need." Or "is it important which PCIe slot to use?" The tech will either answer your follow-up or, if you desire, enhance the article with that content.

You could even say "rephrase from the perspective of a more experienced and empathetic person" or "explain as if you were Mr. Spock." It's not perfect, but if your mind is not blown by the results, you are not paying attention.
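The iterative workflow described above can be sketched as a running conversation history, where each follow-up prompt is appended alongside the model's reply so later answers build on earlier context. The `ask` helper and the canned replies below are purely illustrative, not a real chatbot API:

```python
# Illustrative sketch of iterative prompting: each turn is appended to a
# shared message history, so a model seeing the whole history can refine
# its earlier answer. The `ask` function and replies are hypothetical.

def ask(history, question, reply):
    """Record a user question and the assistant's reply in the history."""
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": reply})
    return history

history = []
ask(history, "How do I build a PC?",
    "1. Install the CPU. 2. Add RAM. 3. Mount the motherboard. ...")
ask(history, "Give me more detail about inserting the CPU.",
    "Align the gold triangle with the socket marking, lower the CPU "
    "without force, then close the retention arm.")

# On every turn, the entire history would be resent to the model,
# which is what lets a follow-up sharpen the first high-level answer.
print(len(history))  # 4 messages: two questions, two answers
```

Real chat interfaces work roughly this way under the hood: the "memory" of the conversation is just the accumulated message list.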
 
  • Like
Reactions: bit_user
Too bad ChatGPT has no sense of sarcasm. If you asked it to write a Tom's Hardware article on asking ChatGPT to build a PC, it would say "First smush the CPU firmly into its socket and then hit your head against your case like you're hitting a brick."
 
I hope every person who cluelessly has, literally, no clue, and would ever resort to "asking an AI" how to do ANYTHING, ends up on this forum so we can explain to them why they are complete and total idiots.

Ok, maybe not that harsh. But seriously, c'mon man. Literally. Exactly like the NFL reference. C'mon man. Is anybody ACTUALLY that dang dumb? I find it hard to believe yet we see evidence of it day after day. So my advice is, don't seek advice from an AI. Seek advice from a PB. (Proven builder).

Other than that, jebus, maybe you DO deserve it if you look that way.
 

apiltch

Editor-in-Chief
Editor
I've been a reader of Toms since the early 90s and love it. Huge fan of all of the content.

However, I don't think this author read enough about ChatGPT before deciding to write an article about it, and he sort of misses the point.

It is designed to be iterative. Sure, the first answer is high-level, but you can follow that first question with another: "give me more detail about inserting the CPU." Or "add more about all of the parts I will need." Or "is it important which PCIe slot to use?" The tech will either answer your follow-up or, if you desire, enhance the article with that content.

You could even say "rephrase from the perspective of a more experienced and empathetic person" or "explain as if you were Mr. Spock." It's not perfect, but if your mind is not blown by the results, you are not paying attention.

FWIW, what inspired me to write the article is seeing how some publishers (e.g., CNET) are already deploying AI bots (though not necessarily GPT-3) to write articles for them. I get that you can improve the quality of the output by giving it more and more specific instructions. However, in order to get really good answers, you'd need to know quite a bit about the topic so you could elicit the correct response, and that's assuming it has the correct data.

If you don't know what you don't know, you won't construct a query capable of getting the specific information you need. What we saw in the case of some publishers using AI is that they assigned human editors to read the copy the AI generated, and they still missed major factual errors. So, if you assigned an experienced tech writer (someone who has built PCs) to edit the AI's article, I'm sure they would find the errors and fix them, either by writing the correct copy themselves or by refining their requests over and over until the AI produced the desired result. Actually, this is a lot like editing a human writer: sometimes we go in and change the copy ourselves, and other times we send it back with queries.

However, as a rule, if we have a writer whose work we can't trust and who makes us feel like we could have done it better and faster if we wrote it ourselves, we don't keep working with that person. And we also assign editors who have experience with the topic at hand. Otherwise, they wouldn't know what information was missing.

The idea that we see some publishers using right now is that AI can generate content quickly and without significant human intervention. If it requires significant human intervention and correction, then it is not a good use of anyone's time or money.

I have been thinking a lot about self-driving cars and where those stand after years of hype. There are just a handful of self-driving vehicles on the road and most, if not all, require some kind of active human monitoring. Some of them have human drivers behind the wheel and others have a techie sitting behind a desk monitoring them remotely. At best, it's an expensive proof-of-concept, because the goal of having AI is for humans to be able to trust it to act on its own.

I feel the same way about ChatGPT. It's a very impressive proof-of-concept, but anyone who has really tested it out knows that it can't be trusted like a human can be (at least not at present). Perhaps you're saying "of course. They said it's in beta." But we're seeing real businesses use AIs to write content that real humans are relying on. So I think it's worth interrogating that notion.
 
D

Deleted member 14196

Guest
Define "intelligence". If you try really hard, I think you'll probably conclude you're mystifying something that's not as magical as we like to believe.
I believe you need to do far more research into the subject, and you'll come to the opposite conclusion. This is a huge topic among the big thinkers. It is not known how consciousness works. So for you to say that is completely off base.

Without consciousness, there is no intelligence. Period.
Algorithms can't think or feel. They can't be conscious. Neither can computers, so there's no intelligence. I would love to see you prove me wrong, because then you could go up against all the panels of scientists who say you're wrong.

Prove me wrong and you'll win the Nobel Prize for sure.

https://www.wordnik.com/words/intelligence

Read the first line.

noun The ability to acquire, understand, and use knowledge

The ability to acquire, understand, and use knowledge. I assert that computers will never understand knowledge; therefore, there is no intelligence. They can be taught to use data, and maybe even have sensors to acquire it, but they will never understand it. And data is not knowledge. The very definition of intelligence is violated.

Knowledge is defined as:

knowledge (nŏl′ĭj), noun
  1. The state or fact of knowing.
  2. Familiarity, awareness, or understanding gained through experience or study.
  3. The sum or range of what has been perceived, discovered, or learned
All AI can do is mimic. Maybe someday, when we have positronic brains like Data's in Star Trek, but not now.

Therefore, my conclusion is that there is no intelligence behind artificial intelligence whatsoever.
 
Last edited by a moderator:

baboma

Respectable
It's a very impressive proof-of-concept, but anyone who has really tested it out knows that it can't be trusted like a human can be (at least not at present)

That's what a proof-of-concept means: to prove the concept works. In this case, the concept is the notion that an interface can take what we're asking and actually give a reasonably coherent and lucid response. That's been amply proven. Whether the response is accurate or trustworthy is beyond the scope of the proof of concept.

...we're seeing real businesses use AIs to write content that real humans are relying on. So I think it's worth interrogating that notion.

Yes, there is a rush to apply ChatGPT and its like to real-world use, including some questionable endeavors, e.g., CNET's AI pieces. That is normal, and is to be expected of any new advance. Every company and business is exploring the boundary of what this "conversational AI" means and how it could generate revenue for them. I expect Tom's Hardware will be in the queue sooner rather than later.

Every technology or tool can be misused. The Internet is the greatest source of information; it is also the greatest source of misinformation, scams, and their various ilk. Yet it has become a critical part of our lives all the same, and we've learned to discern the good from the bad. It is not the tool that is the problem; we have to adapt to its potential, for good and bad. It will be the same for AI.

The hype is definitely real. We've all read of Microsoft pumping billions into ChatGPT and OpenAI, to integrate the tech into its various products (and develop new ones). Likewise, Google reportedly has stopped dragging its feet--it already has a chatbot similar to ChatGPT--and is prioritizing AI as a strategic goal. I'd have to imagine all the big techs like Amazon, Apple, Meta, et al are similarly pivoting to this area.

The goal is huge: To be the next dominant platform with a conversational interface. ChatGPT will have multiple competitors. Pace of development will be fairly rapid as they all race to be the next must-have thing.

I have been thinking a lot about self-driving cars and where those stand after years of hype.

Conversational AI differs from self-driving AI in one critical aspect: the former's development is unconstrained by regulations, sensor technology, or the vast capital required to build vehicles. Any company, or even an individual with sufficient wherewithal, can get into the AI game, and many have and will. Competition will be intense, and the pace of development will be rapid.

Yes, it will likely take a few years before we can talk to our operating systems to get tasks done, but we're not talking decades.
 
Last edited:
  • Like
Reactions: bit_user

gio2vanni86

Distinguished
I've seen Linus and Luke mess with it. Hell, I even tinkered with it. It's far more than just a Google search. I like it. You either use it or get left behind. It honestly was more than just copy and paste. It was AI.
 
  • Like
Reactions: bit_user

jtimouri

Distinguished
Reminds me of a student who once tried to impress me with what he was capable of: he had written an article about a mathematics topic but didn't understand much of it. The introduction looked like he had taken 10 sentences in succession from 10 different sources; he didn't even notice the changes in notation.

Doesn't matter. AI will get better at doing this. A lot better. Remember how bad artificial turf looked when it first came out? Or fake tofu meat? They were jokes then, but are a lot better today.
 
D

Deleted member 14196

Guest
But the bigger question is, would you put your brain in a robot body? Lol
 

Paul Dodd

Prominent
I believe you need to do far more research into the subject, and you'll come to the opposite conclusion. This is a huge topic among the big thinkers. It is not known how consciousness works. So for you to say that is completely off base.

Without consciousness, there is no intelligence. Period.
Algorithms can't think or feel. They can't be conscious. Neither can computers, so there's no intelligence. I would love to see you prove me wrong, because then you could go up against all the panels of scientists who say you're wrong.

Prove me wrong and you'll win the Nobel Prize for sure.

https://www.wordnik.com/words/intelligence

Read the first line.

noun The ability to acquire, understand, and use knowledge

The ability to acquire, understand, and use knowledge. I assert that computers will never understand knowledge; therefore, there is no intelligence. They can be taught to use data, and maybe even have sensors to acquire it, but they will never understand it. And data is not knowledge. The very definition of intelligence is violated.

Knowledge is defined as:

knowledge (nŏl′ĭj), noun
  1. The state or fact of knowing.
  2. Familiarity, awareness, or understanding gained through experience or study.
  3. The sum or range of what has been perceived, discovered, or learned
All AI can do is mimic. Maybe someday, when we have positronic brains like Data's in Star Trek, but not now.

Therefore, my conclusion is that there is no intelligence behind artificial intelligence whatsoever.
Consciousness is not needed for intelligence. Imagine an encyclopedia as big as a planet, created by an alien race, with all the answers to our questions. It would not be conscious, even if loaded into a computer. Creativity is also not required, although nice to have.