News I Asked ChatGPT How to Build a PC. It Said to Smush the CPU.

Paul Dodd

Prominent
Aug 29, 2022
8
2
515
I can imagine the internet filled with rubbish written by ChatGPT, replacing the clickbait rubbish written by humans to make space for selling ads. In addition, it will probably be behind the chat prompts on sales or help pages. Another use will be trolling - one AI carrying on a conversation with another on social media, forums, etc.
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
Consciousness is not needed for intelligence - imagine an encyclopedia as big as a planet, created by an alien race, with all the answers to our questions. It would not be conscious, even if loaded into a computer. Creativity is also not required, although nice to have.
That is also not "intelligent".
It is just a (very large) collection of facts.

Something, or someone, must be able to ask the correct question.

The answer is 42.
But what was the question?
 

Deleted member 14196

Guest
yeah, that statement made no sense to me whatsoever

Everything on this earth that is intelligent is conscious. What we don’t know is if this is just the byproduct of a properly wired brain or if it’s something more. Nobody knows.

like I said, once we have positronic brains that can actually do what our brains do, then we will see if it has consciousness - you know, like Data from Star Trek, HAL from 2001, etc.

look into Dr. Robert Lawrence Kuhn and his series Closer to Truth

then follow him down the rabbit hole to everyone he talks to throughout that entire series - all the biggest thinkers of our time: Sir Roger Penrose, Paul Davies, Sabine Hossenfelder

his interviews with these people are amazing
 
Last edited by a moderator:

baboma

Respectable
Nov 3, 2022
284
338
2,070
I can imagine the internet filled with rubbish written by ChatGPT, replacing the clickbait rubbish written by humans to make space for selling ads.

Sure, there are many possible downsides to AI use, but also many upsides.

Here's one that's relevant to anyone ever venturing onto the Internet in search of solutions: getting a good response from a forum of experts on a particular topic. Oftentimes, the answer already exists, but is buried under hundreds of pages of inane verbiage. Google search can't penetrate it, the forum's search function is useless, and the people on the forum can't be bothered to answer a newbie. What if you can ask the AI to scrape the forum's message base, and find all info relevant to your exact specification? What if you don't even have to specify the forum, and let the AI find the appropriate forums on its own?

An intelligent info scraper that one can create with natural conversation would alone add immense value and worth to Internet use.
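To make it concrete, here's a toy sketch in Python of that scraper-plus-ranker idea. The thread URL and the "div.post" selector are made-up placeholders, and I'm using plain TF-IDF for the ranking; a production version would want embeddings or an LLM on top, plus pagination, rate limits, and the forum's blessing:

```python
# Toy sketch: scrape one thread and rank its posts against a natural-language
# question by TF-IDF cosine similarity. The URL and the "div.post" CSS
# selector are hypothetical stand-ins for whatever markup a real forum uses.
import requests
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_posts(thread_url: str, question: str, top_k: int = 3) -> list[str]:
    html = requests.get(thread_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    posts = [div.get_text(" ", strip=True) for div in soup.select("div.post")]
    if not posts:
        return []
    vectorizer = TfidfVectorizer(stop_words="english")
    post_vectors = vectorizer.fit_transform(posts)      # one row per post
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, post_vectors)[0]
    ranked = sorted(zip(scores, range(len(posts))), reverse=True)
    return [posts[i] for _, i in ranked[:top_k]]

print(rank_posts("https://example.com/forum/thread-123",
                 "PC won't POST after installing a new CPU"))
```

Swap the TF-IDF step for embeddings and let a language model summarize the top hits, and you have the "ask the AI to scrape the forum" workflow in miniature.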
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
Sure, there are many possible downsides to AI use, but also many upsides.

Here's one that's relevant to anyone ever venturing onto the Internet in search of solutions: getting a good response from a forum of experts on a particular topic. Oftentimes, the answer already exists, but is buried under hundreds of pages of inane verbiage. Google search can't penetrate it, the forum's search function is useless, and the people on the forum can't be bothered to answer a newbie. What if you can ask the AI to scrape the forum's message base, and find all info relevant to your exact specification? What if you don't even have to specify the forum, and let the AI find the appropriate forums on its own?

An intelligent info scraper that one can create with natural conversation would alone add immense value and worth to Internet use.
An answer to a question is where Google is now.
Coughing up semi- to mostly-relevant answers and links.

The current level of "AI" does the same.

The problem comes in two ways.
  1. The questioner is asking the wrong question. Standard XY-problem thing.
  2. Google or the AI does not delve into the 2nd/3rd/4th-order issues.


Issue we see here often....
Q: "How do I set up a RAID 1?"

The AI (or Google) will gladly serve up all sorts of ways to do it.
By and large correct.

An intelligent human might ask "Why the RAID 1?"

After some back and forth conversation, the questioner comes to realize that a RAID 1 is not what he actually needed for data security.
The current generation of AI is not yet up to that conversational level.


Again....."AI" will get better. Eventually, much, much better.
Eventually.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
Thanks for the thoughtful reply. I agree with most of your points.

I get that you can improve the quality of the output if you give it more and more specific instructions. However, in order to get really good answers, you'd need to know quite a bit about the topic so you could elicit the correct response, and that's assuming that it has the correct data.
I have two thoughts about this, and I should note that I'm speaking with rather limited experience training and deploying AI - I definitely shouldn't be regarded as any kind of expert.
  1. Perhaps what the AI is missing is a "theory of mind". A good author writes to an audience, and part of that exercise involves understanding and empathizing with that audience. In doing this, you can anticipate what sorts of questions or difficulties they're likely to have.
  2. I wonder if we're not accounting for the fact that it's a chat bot. Interactivity is a fundamental part of chat. If it had instead been designed as a non-interactive article generator, perhaps it would be better at that task.


If you don't know what you don't know, you won't construct a query capable of getting the specific information you need.
True, but we should also consider that ChatGPT wasn't trained on carefully selected material. If it were instead trained on published articles and highly-voted forum posts, as well as those marked "Best Answer", then it would probably sound much more knowledgeable in this domain.

However, as a rule, if we have a writer whose work we can't trust and who makes us feel like we could have done it better and faster if we wrote it ourselves, we don't keep working with that person.
You highlight a key risk, which is that the AI is largely a black box, at least from the perspective of the general public. They won't know much about its strengths and weaknesses and therefore won't know when it can be trusted. This can foster a sense of mistrust, leading people not to use a powerful tool when it could be the best option at hand. Worse yet would be people giving it too much trust and deference, deploying it in ways and situations that are beyond its capabilities.

Perhaps future iterations will be better able to qualify their level of certainty on a given topic, or even about specific facts.

I have been thinking a lot about self-driving cars and where those stand after years of hype. There are just a handful of self-driving vehicles on the road and most, if not all, require some kind of active human monitoring.
FWIW, I always thought the headline-grabbing predictions about when self-driving would finally happen were wildly optimistic. Someday, but not yet.

I feel the same way about ChatGPT. It's a very impressive proof-of-concept, ...
I think it's instructive to look at how far DLSS has come from the initial 1.0 to the current 3.0 version. From the very outset, I saw the potential, but many were derisive of its shortcomings and dismissed the idea entirely on that basis. The thing about technology is that the first iteration of something is rarely the best, and there's usually much room for improvement. If an idea has potential, it'll probably happen. It happened with DLSS, and I believe chat bots and AI authors will only get better. The key is that we need to be wary of their pitfalls and continually refine a set of best practices for harnessing them responsibly. Another thing about technology is that it rarely goes away - this tech is probably here to stay.
 

bit_user

Titan
Ambassador
I believe you need to do far more research into the subject and you’ll come to the opposite conclusion. This is a huge thing with the big thinkers. It is not known how it works. So for you to say that is completely off base.
Okay, maybe I overestimated you. Just "thinking about thinking" indeed probably won't tell you much. You need to actually look at what's being discovered about thinking and cognition in the fields of neuroscience and AI. Of course, AI is a lot more accessible, since researchers aren't constrained by complex biological systems that have evolved over millions of years, but can instead focus on fundamental concepts that can be described and implemented in very abstract ways.

It's like how some of the best Ancient Greek philosophers thought very long and hard about the fundamental nature of life, and yet all they came up with were ridiculous notions of earth, water, air, and fire as its basic elements. The only way to understand something as complex as life or intelligence is to build on scientific principles and research that have been built up over many years. However, if you do that, even someone of unremarkable intelligence can gain working knowledge and insight into these complex subjects.

Without consciousness, there is no intelligence. Period.
Algorithms can't think or feel. They can't be conscious. Not computers either, so there's no intelligence.
This mystical thinking is a dead end. You're essentially saying: "it's too complex for me to understand (even if I didn't try very hard), therefore it must be mystical and fundamentally incomprehensible". That has never been a winning argument against science. I don't buy it.

https://www.wordnik.com/words/intelligence

Read the first line.

"noun The ability to acquire, understand, and use knowledge"

... I assert that computers will never understand the knowledge; therefore, there is no intelligence.
That hinges on your definition of "understanding". I think your error is in treating it as some mystical process, when the reality is much more straightforward. It basically involves fitting information into a framework or mental model, so it can not only be recalled but also related to other facts & information and reasoned about. At no point does "feeling" or "consciousness" enter the picture.

data is not knowledge.
In a computational system, there's typically some sort of processing applied to inputs. Sometimes, it's very elaborate. If you simply mean that data needs to be processed in order to be used in some way, then I'd agree. Otherwise, I think you need to explain how knowledge is different from simply the result of some transformation that enables information to be acted upon and reasoned about.
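To make that less abstract, here's a toy sketch of my own (nothing to do with how ChatGPT works internally): a few facts fitted into a framework so they can be related to each other and reasoned about, deriving a conclusion that was never directly stored. No feeling or consciousness anywhere in sight.

```python
# Toy knowledge base: (subject, relation, object) triples plus one inference
# rule. The program can derive "an SSD can fail" even though that fact was
# never stored, purely by relating facts through the framework.
FACTS = {
    ("ssd", "is_a", "storage_device"),
    ("storage_device", "is_a", "hardware"),
    ("hardware", "can", "fail"),
}

def is_a(thing: str, category: str) -> bool:
    """Follow is_a links transitively."""
    if (thing, "is_a", category) in FACTS:
        return True
    return any(is_a(mid, category)
               for (s, r, mid) in FACTS if s == thing and r == "is_a")

def can(thing: str, action: str) -> bool:
    """A thing can do whatever any category it belongs to can do."""
    if (thing, "can", action) in FACTS:
        return True
    return any(is_a(thing, s)
               for (s, r, a) in FACTS if r == "can" and a == action)

print(can("ssd", "fail"))  # True: ssd -> storage_device -> hardware -> fail
```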

All AI can do is mimic.
You could say that about people, too. Everything we say or do is composed mostly of things we've seen, heard, read, etc. Maybe there's some original thought or idea, but that's filtered through all the skills we've learned by observation and training. I don't see how the same isn't applicable to AI.
 

bit_user

Titan
Ambassador
Reminds me of a student who once tried to impress me with what he was capable of - he had written an article about a mathematics topic, but didn't understand much. The introduction looked like he had taken 10 sentences in succession from 10 different sources; he didn't even notice a change in notation.
That's called "a learning opportunity". I hope you followed up with good questions that helped the student see how much more depth there was to the topic than he'd assumed. Maybe you also fed him some specific leads to go off and investigate?
 

bit_user

Titan
Ambassador
In addition, it will probably be behind the chat prompts on sales or help pages. Another use will be trolling - one AI carrying on a conversation with another on social media, forums, etc.
I've heard that chat prompts on some websites have already been using bots for years. Sometimes it's blatantly obvious, but surely becoming less so.

As for AI chatting with AI, that's an interesting point. Because, not only will corporations be using AI to deal with humans, but we humans will also be using AI assistants to deal with them. A few years ago, didn't Google or someone already demonstrate an AI concierge that could make phone calls to place dinner reservations and such? A lot was made of how it would vary pacing and interject "um" and "ah" to make it sound more human-like. Anyway, the idea of having AI agents for dealing with other AIs seems kinda funny.
 

baboma

Respectable
Nov 3, 2022
284
338
2,070
The problem comes in two ways.
The questioner is asking the wrong question. Standard XY-problem thing.
Google or the AI does not delve into the 2nd/3rd/4th-order issues.

Granted, at present ChatGPT has yet to demonstrate consistent competency in its responses, because it is indeed a proof-of-concept and such things are outside its scope. The answer to competency in any given area is, as bit_user mentioned, training the AI on datasets specific to that area. That has yet to happen, and I fully expect it to be the next step, e.g., applied AIs for household electrical issues, or PC-building issues, etc.

Expert systems that we can talk to, IOW. The big deal here is the second portion, "we can talk to it," not the "expert system" portion. ChatGPT has proven the second to be viable, and I expect the first to be a fairly procedural process.

Issue we see here often....
Q: "How do I set up a RAID 1?"
An intelligent human might ask "Why the RAID 1?"

Once the AI has established competency in a specific topic, the above question is simple to respond to, because it would be the most common type of question posed in the relevant dataset--general newbie-type questions without any parameters.

The first step in answering, for AI or human, is to ask for additional parameters. Once we gather sufficient parameters, we can assess the scope of the need. Next, we determine the fitness of the queried solution (RAID 1) to the need. Last, we either offer a tentative how-to for the queried solution, to be refined with successive Q&A, or we offer a solution with better fitness, which likewise can be further refined.

For humans, the above would require significant effort, and practically speaking no one would expend this much energy for every "freebie" question asked on forums. The above would fall more into the "paid consulting" level. But an AI can easily achieve this, limited only by the scope and quality of its dataset. It's very much doable with an interactive AI capable of natural conversation.
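As a toy illustration of that flow, here's a sketch in Python. The questions and the RAID-1 rule of thumb are invented placeholders, not anything today's chat bots actually do; the point is the loop itself: gather parameters, assess the need, check fitness, then answer.

```python
# Toy sketch of "gather parameters before answering", applied to the
# "How do I set up a RAID 1?" example above. The questions and the decision
# rule are invented placeholders for what a trained applied-AI might ask.
QUESTIONS = {
    "goal": "What are you protecting against (drive failure / deletion / malware)? ",
    "has_backup": "Do you already keep a separate backup of this data (yes/no)? ",
}

def clarify_then_answer() -> str:
    answers = {key: input(prompt).strip().lower()
               for key, prompt in QUESTIONS.items()}
    # Fitness check: RAID 1 only mirrors drives. Deletions and malware get
    # mirrored right along with the good data, so it is no substitute for a backup.
    if "delet" in answers["goal"] or "malware" in answers["goal"] \
            or answers["has_backup"] != "yes":
        return ("RAID 1 may not be what you actually need: it only covers drive "
                "failure. Set up a separate backup first, then revisit RAID.")
    return "RAID 1 fits the need. Next step: a how-to for your OS and controller."

print(clarify_then_answer())
```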
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
For humans, the above would require significant effort, and practically speaking no one would expend this much energy for every "freebie" question asked on forums. The above would fall more into the "paid consulting" level.
People, humans, do that every single day. For free.
Myself included.

Forums and discussion boards all over the world.
 
  • Like
Reactions: bit_user

baboma

Respectable
Nov 3, 2022
284
338
2,070
People, humans, do that every single day. For free.
Myself included.
Forums and discussion boards all over the world.

People who take the time for the "20 questions" routine are the exception, not the rule. There is not enough time in the day to tease out every newbie query, and even if you do commit this much effort, eventually you will burn out. I say this from firsthand experience, having been a forum regular and "competent" in various topics since the early BBS'ing days.

If you can indeed do the above for years on end, then you are the rare exception. "Good will" is a scarce and ephemeral quality on the Internet. It takes effort, and we tend not to expend too much of it on strangers.

Moreover, the same criticisms posed to the AI can also be directed to humans. How can one be sure of the answerer's competency, or bias? People tend to be biased toward things that they know or own.
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
People who take the time for the "20 questions" routine are the exception, not the rule. There is not enough time in the day to tease out every newbie query, and even if you do commit this much effort, eventually you will burn out. I say this from firsthand experience, having been a forum regular and "competent" in various topics since the early BBS'ing days.

If you can indeed do the above for years on end, then you are the rare exception. "Good will" is a scarce and ephemeral quality on the Internet. It takes effort, and we tend not to expend too much of it on strangers.

Moreover, the same criticisms posed to the AI can also be directed to humans. How can one be sure of the answerer's competency, or bias? People tend to be biased toward things that they know or own.
All I've been saying, all along, is.... Is the current state of AI good enough to start asking questions in a conversational sense?
It appears, not yet.

Giving the answer to a question is one thing.
Getting to the actual meat of the problem is something else.

The person asking the original question does not know what they do not know. And often, they can't see beyond their preconceived, but ultimately incorrect, notions.
The current state of AI will, it seems, happily give a very detailed answer to Question X. What the person really needed was a solution for Problem Y.
 

baboma

Respectable
Nov 3, 2022
284
338
2,070
All I've been saying, all along, is.... Is the current state of AI good enough to start asking questions in a conversational sense?
It appears, not yet.

I don't disagree. I think where we differ is on the "when." I'm more optimistic that we will see it "sooner," whereas I suspect you're among the "later" crowd. Which is fine. There should be a balance of optimists and skeptics.

Giving the answer to a question is one thing.
Getting to the actual meat of the problem is something else.

The "what is best" or "how to do X" types of questions are posed every day, and on their face aren't answerable without many follow-up interactions. The AI doesn't need to be intelligent to ask for additional data to better assess the question. As long as the solutions (including alternatives) are within its dataset, I don't see any overriding obstacle to a competent response.

In regards to forum answers, an AI response has a large benefit over a human one: timeliness.

Even if a forum has "competent" regulars willing to field newbie questions, it may take hours or days or even weeks to get a response, depending on how active the forum is. Then, if one has to go through the 20-questions procedure, it will take more days for the questioner to get a final answer, because of the async nature of forum interaction. It's like playing a multiplayer game by e-mail (PBEM was a thing back in the day). This wouldn't be an issue for an interactive AI that can give instant responses, which if trained correctly can be just as competent.

The current state of AI will, it seems, happily give a very detailed answer to Question X. What the person really needed was a solution for Problem Y.

ChatGPT's ability to follow successive instructions to refine the initial query is already very good. Comparative X-vs-Y assessment, or determining the fitness of a solution, is an issue for the particular dataset to address.

Determining fitness is already an issue now. Web sites, and people, tend to take shortcuts in offering "best" this or that without due qualification. An AI can do better by listing the pros and cons of each option.
 
  • Like
Reactions: bit_user

baboma

Respectable
Nov 3, 2022
284
338
2,070
However, as a rule, if we have a writer whose work we can't trust and who makes us feel like we could have done it better and faster if we wrote it ourselves, we don't keep working with that person.

You highlight a key risk, which is that the AI is largely a black box, at least from the perspective of the general public. They won't know much about its strengths and weaknesses and therefore won't know when it can be trusted.

I think 'trust' is overrated. Look at us today. People view sites like Tom's Hardware, Gamers Nexus, et al. as authorities on PC issues. But we know nothing of their credentials or the credentials of the staff. It's worse for forums, where most people use aliases, and we know absolutely zilch about the people answering our questions.

The only info we have to go on is that Tom's Hardware has been around for a number of years, i.e., it has a track record, and that other people deem it an authority, i.e., the bandwagon argument. Likewise, I can look at your or USAFRet's extensive posting stats and deem your statements "trustworthy." But those are fairly weak grounds to establish trust.

Yet here we are, with most trusting that Tom's HW knows what it's talking about for PC HW. Track record and the bandwagon argument. It will be the same for AI. People will be wary at first, but after a period of the AI giving good answers and few gaffes, using AI for answers would be as natural (and trustworthy) as using calculators to do math. We trust calculators, no?
 

USAFRet

Titan
Moderator
"black box"
Is the AI giving us the actual correct info, totally unbiased? Or is it giving us the correct info while subtly steering us in a particular direction in an effort to gain clicks or time on the platform?

Facebook and Google already do this.
I can't imagine their future AI won't continue that.

Personally, I have no agenda, other than hopefully providing correct info.
I care not one whit where you buy a particular product.


But, time will tell.
 

bit_user

Titan
Ambassador
Web sites, and people, tend to take shortcuts in offering "best" this or that without due qualification. An AI can do better by listing the pros and cons of each option.
On a related note, perhaps we can also train AIs to indicate how certain they are about the answer. This would also help reduce the chances of leading people astray.

Some humans already do this (e.g. "I'm not too sure, but I think the answer is...", or "I'm 99% certain the answer is..."), but not everyone and some do it inconsistently. People also suffer from the vice of pride and not wanting to admit when they're wrong.
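Mechanically, the raw material is already there: every token a language model emits comes out of a probability distribution. Here's a crude sketch of turning that into a hedge; the thresholds are arbitrary placeholders, and raw probabilities would need calibration before anyone should trust them:

```python
# Crude sketch: map a model's average per-token probability to a hedging
# phrase. Real systems need calibration (e.g., temperature scaling), since
# raw softmax probabilities are often overconfident.
import math

def hedge(token_logprobs: list[float]) -> str:
    # Geometric-mean probability of the emitted tokens.
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if avg_prob > 0.9:
        return "I'm fairly confident that..."
    if avg_prob > 0.6:
        return "I think, but am not certain, that..."
    return "I'm not too sure, but my best guess is that..."

print(hedge([-0.05, -0.02, -0.11]))  # high average probability
print(hedge([-1.2, -0.9, -1.5]))     # low average probability
```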
 

bit_user

Titan
Ambassador
I can look at your or USAFRet's extensive posting stats and deem your statements "trustworthy." But those are fairly weak grounds to establish trust.
Right, because most of those stats might have been accumulated in a different sub-field than the one being asked about.

using AI for answers would be as natural (and trustworthy) as using calculators to do math. We trust calculators, no?
Bad example. Calculators simply implement well-defined algorithms. If you're not sure about the result that one is giving you, you can use any other calculator to check it. Granted, some approximations are needed in higher-order functions, but calculators are relatively simple and easy to test.

The quality of an Expert System can be quantified, but only on a limited test set. It's impossible to test it exhaustively, and unreasonable to expect any implementation to be "correct" (by anyone's definition) 100% of the time. Worse, you can unknowingly stumble into an area where its training data has some bias and be led astray, even after a long run of getting good answers or advice from it. People will therefore always need a touch of skepticism, the same as how they'd treat advice from any human.
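To put a number on "quantified, but only on a limited test set", here's a small sketch; the 92-out-of-100 figures are invented. Even a strong score on a sample leaves a wide band of uncertainty, and it says nothing at all about topics the test set never touches:

```python
# Wilson score interval for an expert system graded on a finite test set.
# The interval makes the limitation explicit: a sample of n questions only
# brackets the true accuracy, and only over the topics the sample covers.
import math

def wilson_interval(correct: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = correct / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin

low, high = wilson_interval(92, 100)  # hypothetical: 92 of 100 answers correct
print(f"Observed 92% accuracy; the 95% interval is roughly {low:.0%} to {high:.0%}.")
```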
 

bit_user

Titan
Ambassador
Should ask ChatGPT if an AI can be wrong...
It learns from whatever it's been trained on, which almost definitely assures that it will say "yes".

And a calculator can be wrong; it only deals with one type of math. It cannot deal with imaginary numbers or discrete math (truths/lies).
Huh? Maybe you never used a graphing calculator. I have an Android app called Droid48 that emulates the old HP calculator I used to have. It supports complex numbers (the answer it gives for sqrt( -1 ) is "(0, 1)") and Boolean operators.
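For anyone without an HP handy, plain Python makes the same point: complex ("imaginary") arithmetic and Boolean logic are both perfectly well-defined and programmable.

```python
# Complex arithmetic and Boolean logic in the standard library; no graphing
# calculator required.
import cmath

print(cmath.sqrt(-1))            # 1j, i.e., HP's "(0, 1)" in rectangular form
print((2 + 3j) * (2 - 3j))       # (13+0j): product of complex conjugates
print(cmath.exp(1j * cmath.pi))  # ~(-1+0j): Euler's identity, up to rounding

# Boolean ("truth/lies") algebra is a one-liner:
print(True and not False)        # True
```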
 

Karadjgne

Titan
Ambassador
It learns from whatever it's been trained on, which almost definitely assures that it will say "yes".


Huh? Maybe you never used a graphing calculator. I have an Android app called Droid48 that emulates the old HP calculator I used to have. It supports complex numbers (the answer it gives for sqrt( -1 ) is "(0, 1)") and Boolean operators.

So, ask the AI if an AI can be wrong, and it says yes. So how is anyone able to trust that even that answer is right, if it can also be wrong?

Yes, I've used graphing calculators, scientific, regular, etc., and not a single one can deal with an imaginary number, just as they cannot deal with truth/lies, as imaginary numbers are math without a definition, and truth/lies is abstract thought, which AFAIK can't be programmed. A calculator can't think outside its own box.
 

bit_user

Titan
Ambassador
So, ask the AI if an AI can be wrong, and it says yes. So how is anyone able to trust that even that answer is right, if it can also be wrong?
Fundamentally, you can't 100% trust an AI, but you also can't 100% trust a human.

As I've said, the issue of trust (people trusting it either too much or too little) is going to be an ongoing challenge. We tend to have a much better intuition for human intelligence, and at least feel like we can gauge how much to trust most people, whereas AI is a lot more opaque in its level of sophistication and how it was trained. You could easily imagine a scenario where the AI is batting 1.000, and then you veer slightly into an area it knows little about and suddenly its accuracy falls off. But, because it was so accurate before, maybe you continue to trust its answers. Therefore, future iterations of this tech should be made diligent and capable of qualifying their level of confidence.

Yes, I've used graphing calculators, scientific, regular, etc., and not a single one can deal with an imaginary number
Sorry to hear that. I've always sworn by HP graphing calculators. Either you've never used one or you're simply wrong about them.

just as they cannot deal with truth/lies
Have you ever heard of Boolean algebra? As I said, HP implemented the Boolean operators.

as imaginary numbers are math without a definition,
LOL, wut? You know that "imaginary" is just a name, right? Complex arithmetic is indeed well-defined. I dunno what else to say. Maybe ask @Math Geek, if you still have questions about that.

and truth/lies is abstract thought, which AFAIK can't be programmed.
If it's a well-formed idea, then you can define notation and operations on it. If you're talking about something other than Boolean algebra, perhaps what you ought to read up on is the field of Knowledge Representation.

A calculator can't think outside its own box.
Ugh. Here we go again...

If you're careful to define what you mean by "thinking", then the answer is potentially "yes".

Anyway, let's try to stay on topic. Tangents about complex numbers don't seem to be productive.
 
Last edited:

Deleted member 14196

Guest
I love how you try to bend the definitions to your whims

If you don’t like something, change the definition. Complete nonsense

Nothing you've said has dissuaded me, nor will it ever. Your word walls become meaningless. I think it's you who are trying to appear more clever than you actually are.
 
Last edited by a moderator:

DSzymborski

Curmudgeon Pursuivant
Moderator
While imaginary numbers aren't literally imaginary and are genuine mathematical objects with importance in a lot of fields, let's all cool off a bit here. Pretty much everyone here is a long-time user who knows better than to get into a flame war. I never expected a flame war partially about complex numbers to break out here of all places. Chill, please.
 

bit_user

Titan
Ambassador
I love how you try to bend the definitions to your whims

If you don’t like something, change the definition. Complete nonsense
We need clear and agreed-upon definitions to have meaningful communication. Not only that, but you also cannot solve a problem until you clearly define it.

I'm not trying to play semantic games. Sorry if it seemed that way.

nothing you’ve said has dissuaded me, nor will it ever. Your word walls become meaningless.
I try to explain things as I see & understand them. I'm sorry if it's not helpful for you.
 
Last edited: