News I Asked ChatGPT How to Build a PC. It Said to Smush the CPU.

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
762
1,760
Oh, and it can also be horribly and dangerously wrong, introducing factual errors that aren’t obvious to someone who isn’t familiar with the topic at hand.

ChatGPT just strings together pieces of text it found online.

When you prod the algorithm and ask the right question, you quickly find out ChatGPT is just copy/pasting pieces of text.

ChatGPT was just cleverly marketed as "AI" by the company behind it. Just like Nvidia cleverly marketed simple machine learning from the '70s as "AI", even though these machine learning algorithms are dumb as rocks and a Tesla still can't properly self-park after a decade of machine learning.

 
Last edited:
  • Like
Reactions: KyaraM and bit_user

baboma

Respectable
Nov 3, 2022
284
338
2,070
The article's criticism lacks perspective, and has misleading gaffes of its own.

The 1st ChatGPT (AI hereafter) piece is clearly written as a high-level "overview" instruction, not a step-by-step build, which, as mentioned further in the piece, would need a specific parts list for more detailed instruction. Anyone with a modicum of common sense would see that.

The 2nd AI piece does show the limits of the detail the AI can convey: one still can't use it as the only guide. But it is still useful as an overview guide. It should be pointed out that the given parts list is incomplete (a case is omitted, which can significantly alter install steps) and vague (types of RAM, type and make of motherboard, etc.). The operative judgment here is GIGO.

As far as gaffes and blunders go, the author deliberately exaggerated the AI's missteps to sensationalize his piece. Nowhere in the AI response did it use the word 'smush' (which isn't a synonym of 'press'). By now, we all accept the use of exaggerated or embellished article titles (read: clickbait in varying degrees) for SEO purposes. We know that websites like this depend on clicks for income, and we tolerate the practice to some extent. But if we are to point out foibles in AI-generated writing, then we should first look to our own foibles, lest we throw stones from inside a glass house.

Getting back to the AI's capabilities, I'll offer the aphorism: ask not how well the dog can sing, but marvel that it can sing at all. The author is nitpicking on what is a prototype. Even in its present form, the AI's guide is useful, as it forms a meaty framework that a human guide could flesh out with details, thereby shortening his workload.

ChatGPT (and its inevitable successors) will only improve. IMO, step-by-step how-tos are not a high bar to clear. They don't require any creative thought or reasoning.
 

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
762
1,760
As far as gaffes and blunders go, the author deliberately exaggerated the AI's missteps to sensationalize his piece.

If anything, the author is too kind to ChatGPT.

When an algorithm contradicts itself by saying:

"France has not won the 2018 World Cup. The 2018 World Cup was won by France."


...maybe it's time to admit ChatGPT lacks any intelligence whatsoever, is completely unreliable, and all it does is paste together pieces of text it found online.

 

USAFRet

Titan
Moderator
The article's criticism lacks perspective, and has misleading gaffes of its own.

The 1st ChatGPT (AI hereafter) piece is clearly written as a high-level "overview" instruction, not a step-by-step build, which, as mentioned further in the piece, would need a specific parts list for more detailed instruction. Anyone with a modicum of common sense would see that.

The 2nd AI piece does show the limits of the detail the AI can express: one still can't use it as the only guide. But it is still useful as an overview guide. It should be pointed out that the given parts list is incomplete (a case is omitted, which can significantly alter install steps) and vague (types of RAM, type and make of motherboard, etc.). The operative judgment here is GIGO.

As far as gaffes and blunders go, the author deliberately exaggerated the AI's missteps to sensationalize his piece. Nowhere in the AI response did it use the word 'smush' (which isn't a synonym of 'press'). By now, we all accept the use of exaggerated or embellished article titles (read: clickbait in varying degrees) for SEO purposes. We know that websites like this depend on clicks for income, and we tolerate the practice to some extent. But if we are to point out foibles in AI-generated writing, then we should first look to our own foibles, lest we throw stones from inside a glass house.

Getting back to the AI's capabilities, I'll offer the aphorism: ask not how well the dog can sing, but marvel that it can sing at all. The author is nitpicking on what is a rudimentary model. Even in its present form, the above guide is useful, as it forms a meaty framework that a human guide could flesh out with details, thereby shortening his workload. It will inevitably be improved. IMO, step-by-step how-tos are not a high bar to clear.
The problem is... some of these AI-generated texts are out there in the wild.

"I have Problem X"

A long AI-generated block of text on how to fix "X".
It looks really good and knowledgeable. And the person with the problem will go down a long road of "fixes" that may indeed cause more harm than good.

But the actual problem was Y. And a clueful human would have figured that out in the first couple of sentences of the original problem statement.
I've personally seen this in the last few days.

Classic X/Y.
 

baboma

Respectable
Nov 3, 2022
284
338
2,070
When an algorithm contradicts itself by saying:

"France has not won the 2018 world cup. The 2018 world cup was won by France."

...maybe it's time to admit ChatGPT lacks any intelligence whatsoever and is completely unreliable.

The issue in question is not whether ChatGPT has "intelligence," but whether it can adequately convey information in conversational form. It's the next step in user interfaces: we simply talk to our device as we would to a human.

Even as a prototype, its capabilities have disrupted several industries, and will transform many more. We've all read about how colleges are changing their curriculum to ward off ChatGPT essays, as well as CNet's use of AI-generated articles. I can think of many more potentially affected industries. Customer service would be a prime candidate.

Yes, at this point they are error-prone and lacking in flair, etc. But AI capabilities can be iterated and improved upon, while human capabilities generally can't.

Rather than focusing on its present deficiencies, we should all be concerned with the future implications of just how many jobs will be replaced by AI. CNet's use of AI isn't an outlier; it's only a pioneer. As generative AI inevitably improves, there'll be more and more websites using AI to write articles, and those articles will be of higher quality than the average (human-written) blog pieces today. That's not saying much.
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
The issue in question is not whether ChatGPT has "intelligence," but whether it can adequately convey information in conversational form. It's the next step in user interfaces: we simply talk to our device as we would to a human.

Even as a prototype, its capabilities have disrupted several industries, and will transform many more. We've all read about how colleges are changing their curriculum to ward off ChatGPT essays, as well as CNet's use of AI-generated pieces. I can think of many more potentially affected industries. Customer service would be a prime candidate.

Yes, they are error-prone and lacking in flair, etc. But AI capabilities can be iterated and improved upon, while human capabilities generally can't.

Rather than focusing on its present deficiencies, we should all be concerned with the future implications of just how many human jobs will be replaced by AI. CNet's use of AI isn't an outlier; it's only a pioneer. As generative AI inevitably improves, there'll be more and more websites using AI to write articles.
Will it get better?
Yes.

Is it there yet?
Not even close.
 

DavidLejdar

Respectable
Sep 11, 2022
286
179
1,860
IMO, so-called AI is way overrated (as it is). E.g., one could fill the databanks with an association along the lines of "AI is outperforming every human in having a smelly butt," and if the bot then gets asked "What are you perfect at?", the answer would probably be "I excel at having a smelly butt." Which may be funny, but it hardly speaks of any intelligence when the bot doesn't know, or rather isn't aware, that it doesn't even have a butt to begin with.

And some school of psychology apparently argues that humans are no different. But even children usually don't end up listing just anything as their accomplishment merely because someone else claimed it to be one.
 

baboma

Respectable
Nov 3, 2022
284
338
2,070
Will it get better?
Yes.

Is it there yet?
Not even close.

Sure, one can play the skeptic and say that it's not something we need to worry about now. But as shown in the news, its capabilities are already something MANY people (educators, laid-off CNet journalists, et al.) are worried about. Whether or not you are worried depends on what your job is.

Again, it's about much more than the potential loss of jobs.
 
  • Like
Reactions: bit_user

Dantte

Distinguished
Jul 15, 2011
173
60
18,760
IMO, so-called AI is way overrated (as it is). E.g., one could fill the databanks with an association along the lines of "AI is outperforming every human in having a smelly butt," and if the bot then gets asked "What are you perfect at?", the answer would probably be "I excel at having a smelly butt." Which may be funny, but it hardly speaks of any intelligence when the bot doesn't know, or rather isn't aware, that it doesn't even have a butt to begin with.

And some school of psychology apparently argues that humans are no different. But even children usually don't end up listing just anything as their accomplishment merely because someone else claimed it to be one.

I agree with your first sentence, but not the second. Humans are different; we possess the ability to think critically beyond our 'programming'. The problem is that tools like ChatGPT, and AI in general, are expected to be like humans, which they can't be (yet?); they are only capable of what they're programmed to do and nothing more. This leads to another problem: people are expecting something else, AI, to take over the critical-thinking role, which it can't, and in turn they are becoming idiots!
 

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
762
1,760
Even as a prototype, its capabilities have disrupted several industries

Has it? CNET is getting a horrible reputation for using AI to write its articles, and CNET presented false information. Its article about financial savings was riddled with false data, as several outlets have shown.

Which industry will get disrupted by giving out unreliable information? Outside of crypto scammers, I don't see how any business is served by ChatGPT.

And when your main customers seem to be scammers, who are now actively using ChatGPT to scam people, maybe some questions should be raised about the so-called "benefits" of this technology, which is capable of spreading false information on an unprecedented scale.

 
Last edited:

baboma

Respectable
Nov 3, 2022
284
338
2,070
Has it? CNET is getting a horrible reputation from this; CNET presented false information. Its article about financial savings was riddled with false data, as shown by The Atlantic.

How a tool is used isn't necessarily indicative of its capabilities.

CNet's use of AI was poor. At its present level, AI is best used to produce "draft" pieces to be edited by humans, rather than used unsupervised. Reuters (and presumably AP, et al.) are also using AI tools for certain types of content, to no detriment. CNet's problem is more about its current management than about AI. Read more here:

https://theverge.com/2023/1/19/23562966/cnet-ai-written-stories-red-ventures-seo-marketing

Which industry will get disrupted by giving out unreliable information? Outside of crypto scammers, I don't see how any business is served by ChatGPT.

Wait and see. I don't think we need to wait too long.

And when your main customers seem to be scammers, who are now actively using ChatGPT to scam people

Any tool can be used for good or bad. Guns can be used for defense, or to kill people. So can a hammer, or a knife, etc.
 
Last edited:
  • Like
Reactions: bit_user

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
762
1,760
Wait and see. I don't think we need to wait too long.

That sounds eerily like Tesla, which used AI to teach its cars how to park and claimed the AI would get better in time.

Meanwhile, every other car company has a working parking algorithm that uses simple math, without any "AI/machine learning".

But Tesla cars, relying on a decade of AI, still fail to self-park properly.

 

USAFRet

Titan
Moderator
How a tool is used isn't necessarily indicative of its capabilities.

CNet's use of AI was poor. At its present level, AI is best used to produce "draft" pieces to be edited by humans, rather than used unsupervised. Reuters (and presumably AP, et al.) are also using AI tools for certain types of content, to no detriment. CNet's problem is more about its current management than about AI.
And that IS one of the major problems.
People putting too much trust and faith in a currently flawed or inappropriate tool.

When a supposedly trusted tool or site puts up some content, and people believe and act on it, major problems can happen when that content is fundamentally flawed.

And we have people white-knighting the AI and stating that all of its problems are human-induced.
 
  • Like
Reactions: bit_user

baboma

Respectable
Nov 3, 2022
284
338
2,070
That sounds eerily like Tesla, which used AI to teach its cars how to park and claimed the AI would get better in time.

I note your attempts to denigrate the subject by equating it to various unsavory elements (scammers, Tesla AI, etc.). Casting aspersions is not a tactic conducive to productive conversation.

ChatGPT, or generative AI in general, has nothing to do with scammers or Tesla. You are welcome to argue on its merits or demerits.


And that IS one of the major problems.
People putting too much trust and faith in a currently flawed or inappropriate tool.

Who is?

There are fools everywhere, with or without AI. That fools exist does not mean AI is the cause, nor does it mean we can ignore AI advances because fools might be affected.

And we have people white-knighting the AI and stating that all of its problems are human-induced.

As with any new advance, there will inevitably be hype. Nobody is white-knighting AI or belittling humans. But a conversational interface linking us to the Internet, the world's repository of information, is indeed something to be excited about. Fools or no.
 

Deleted member 14196

Guest
Just stop calling it AI. The name is meaningless. There is NO intelligence.
 

ottonis

Reputable
Jun 10, 2020
224
193
4,760
ChatGPT is completely unreliable. I asked it to name the best DAWs (digital audio workstations) for Windows, and ChatGPT came up with "Logic Pro", which is only available for Mac OS and certainly not for Windows.
 

HansSchulze

Prominent
Jan 20, 2023
11
1
515
You put this in the wrong thread?
Nope. The article says that ChatGPT didn't tell you which slot to put your graphics card into. I put graphics cards in any and all PCIe slots. Even PCI handled that correctly.
Just to point out that the author was grasping at straws to find technical criticisms of the AI.
 

PEnns

Reputable
Apr 25, 2020
702
747
5,770
And lazy college kids are tripping over themselves to use this "fancified" Google search to help them with assignments??

Yeah, "the future is so bright, I gotta wear shades. "
 
Last edited by a moderator:
  • Like
Reactions: ottonis

bit_user

Titan
Ambassador
ChatGPT just strings together pieces of text it found online.
Not randomly. It actually builds higher-level representations of the underlying information that (hopefully) get rid of inconsistencies and errors... unless the errors are more frequent than the correct information and it can't otherwise deduce what's right. In this regard, it's really not so much worse than a human.

But, the key point is that it's limited by the quality of the data you train it on.

you quickly find out ChatGPT is just copy/pasting pieces of text.
It's not copying/pasting any more than your brain does. It actually encodes this information in its model and resynthesizes the text as needed. A better way to describe it is memory/recall. Again, like what your brain does when you learn facts and then try to remember them on demand.
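To make the distinction concrete, here is a toy sketch in Python. It is purely illustrative and not how ChatGPT is actually implemented (real GPT models are transformer networks with billions of learned parameters, not a word-pair table): it learns word-to-word statistics from a tiny corpus and then samples new sentences from those statistics instead of pasting stored text back verbatim.

```python
import random
from collections import defaultdict

# Tiny "training corpus" (a stand-in for the web text a real model is trained on).
corpus = ("france won the 2018 world cup . "
          "croatia lost the 2018 world cup final .").split()

# Learn word-to-word statistics: for each word, which words follow it and how often.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, max_words=8):
    """Resynthesize a sentence by sampling from the learned statistics."""
    out = [start]
    for _ in range(max_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# e.g. "france won the 2018 world cup final ." -- a sentence not in the corpus verbatim
print(generate("france"))
```

Scaled up by many orders of magnitude, and with a neural network in place of the lookup table, that same store-statistics-and-resample idea is also why the model can confidently resynthesize something false, like the World Cup contradiction quoted above, when its learned statistics point the wrong way.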

LOL. "Don't mess with Texas." Clearly, it learned too well that Texas is the biggest, best, and most superlative!
:D
 
Last edited: