News I Asked ChatGPT How to Build a PC. It Said to Smush the CPU.


Karadjgne

Titan
Ambassador
We need clear and agreed-upon definitions to have meaningful communication.
And right back to ChatGPT. And that, I believe, is the problem in a nutshell. Humans only need clear and defined ideas to have meaningful communication, not necessarily definitions or even directions. Since a computer is limited by its input values and programming, ideas are not something it can deal with. Like asking a computer about a feeling, or emotion. It gets stuck on clinical or Webster's definitions.
 

bit_user

Titan
Ambassador
Since a computer is limited by its input values and programming, ideas are not something it can deal with.
What is an "idea", beyond an abstract concept? Computers can certainly model concepts.

Or, do you mean "idea" in the sense of inspiration? For that, you just need an ability for reflection, which is to say ways that a network can feed back into itself. Some random event can then trigger associations and if something sensible emerges, that's your idea. Parts of your brain are filtering out nonsensical ideas and interpretations all of the time, even without you being aware of it.

In neural networks, this dynamic has been simulated using an approach called a generative adversarial network, where two networks feed back on each other. One acts as an idea generator, and the other is a sort of gatekeeper that rejects nonsense. As they compete with each other, each improves. It's a technique that underlies some of these "AI image generators".
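To make that concrete, here is a toy sketch of the adversarial setup in Python. Everything in it is illustrative, not from any particular paper: 1-D Gaussian "real" data, a linear generator, a logistic-regression gatekeeper, and made-up hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a linear map g(z) = a*z + b from random noise to samples.
a, b = 1.0, 0.0
# Gatekeeper (discriminator): logistic regression D(x) = sigmoid(w*x + c),
# trained to output 1 for real samples and 0 for generated ones.
w, c = 0.1, 0.0

lr = 0.01
for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # "real" data: N(4, 1)
    z = rng.normal(0.0, 1.0, size=32)
    fake = a * z + b

    # Gatekeeper update: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator update: gradient ascent on log D(fake) (non-saturating loss),
    # i.e. try to make the gatekeeper accept its output.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, size=1000) + b
```

As the two contest, the gatekeeper gets better at flagging fakes and the generator drifts its output toward the real data; neither one is copying anything, which is the point of the adversarial setup.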

Like asking a computer about a feeling, or emotion.
Define "feeling" or "emotion". I think a lot of people get hung up on these, because we have some intuitive sense of what they are, but we don't think about their physiological manifestations or their role in cognition. When you look at them at that level, all that's happening is a chemical change that instantly influences the behavior of the network (quite a gross oversimplification, I know).

I'm quite certain you can emulate moods in an artificial neural network. The main question is whether you can tune the machinery of how moods influence its function, and how moods are regulated, as finely as it is in humans. And let's not kid ourselves: we all know people who are too moody or too quick to anger. So, it's not as if biology has struck a perfect balance.
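As a toy illustration of the chemical-modulation idea above (entirely a sketch, not a model of real biology): a single global "mood" scalar, acting like a neuromodulator, can shift every unit's effective firing threshold at once and so change the whole network's behavior without changing its weights.

```python
import math

def unit(x, threshold, mood):
    # One sigmoid unit. A positive mood lowers the effective threshold,
    # making the unit quicker to "react" to the same stimulus.
    return 1.0 / (1.0 + math.exp(-(x - threshold + mood)))

stimulus, threshold = 1.0, 2.0
calm = unit(stimulus, threshold, mood=0.0)       # sigmoid(-1) ~ 0.27
irritable = unit(stimulus, threshold, mood=1.5)  # sigmoid(0.5) ~ 0.62
assert irritable > calm  # same input, stronger response in the "angry" state
```

The hard part the post points at isn't this mechanism; it's tuning how the mood value itself gets set and regulated.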
 
  • Like
Reactions: SIMOMEGA
ChatGPT just strings together pieces of text it found online.

When you prod the algorithm and ask the right question, you quickly find out ChatGPT is just copy/pasting pieces of text.

ChatGPT was just cleverly marketed as "AI" by the company behind it. Just like Nvidia cleverly marketed simple machine learning from the 70s as "AI", even though these machine learning algorithms are dumb as a rock and a Tesla still can't properly self-park after 1 decade of machine learning.

I see you have no idea how AI works. Kindly do some research before spreading misinformation. Thanks.
 
  • Like
Reactions: bit_user

Deleted member 14196

Guest
Someone ask it to cure diabetes and see what it comes up with. Maybe we'll get lucky.
 

bit_user

Titan
Ambassador
Someone ask it to cure diabetes and see what it comes up with. Maybe we'll get lucky.
It's funny you should say that, because IBM is indeed doing automated analysis of medical research to help discover new treatment regimens. I don't know what sort of technology they're using for it, however.

They claimed that the rate at which new medical research is being produced has long since outpaced the ability of a single human expert to keep up with it all. I heard about it a few years ago, and I think their primary focus was cancer research.
 

Deleted member 14196

Guest
Excellent. Now, how long until we have robot doctors that can give me a robot hand like Luke Skywalker? lol, that would be cool. Not the hand, but the robot doctor.

I just hope that robot doctors are more affordable than our doctors lol
 

bit_user

Titan
Ambassador
Excellent. Now, how long until we have robot doctors that can give me a robot hand like Luke Skywalker? lol, that would be cool. Not the hand, but the robot doctor.
Have you seen the robotic-assisted surgery systems? And they can even do some of that stuff remotely. Now, imagine a computer driving it instead of a human. That way Skynet can put brain implants in humans to infiltrate our ranks!
; )

I just hope that robot doctors are more affordable than our doctors lol
Advances in medicine always seem to be accompanied by a disproportionate advancement in prices.
: (
 

Deleted member 14196

Guest
Have you seen the robotic-assisted surgery systems? And they can even do some of that stuff remotely. Now, imagine a computer driving it instead of a human. That way Skynet can put brain implants in humans to infiltrate our ranks!
; )


Advances in medicine always seem to be accompanied by a disproportionate advancement in prices.
: (
Yes, actually, a robot assisted in my cataract removal, and at the same time it corrected my vision. Actually, the robot did all the work; the doctor just set it up. It was pricier, but it uses a laser to make the incision, so no stitches were needed after the lens replacement. A perfect implant lens, too.

Eventually, the robot will probably be in control.
 
  • Like
Reactions: bit_user

d0x360

Distinguished
ChatGPT just strings together pieces of text it found online.

When you prod the algorithm and ask the right question, you quickly find out ChatGPT is just copy/pasting pieces of text.

ChatGPT was just cleverly marketed as "AI" by the company behind it. Just like Nvidia cleverly marketed simple machine learning from the 70s as "AI", even though these machine learning algorithms are dumb as a rock and a Tesla still can't properly self-park after 1 decade of machine learning.


Yes and no. I left the method out, but you can get it to use current data, write an article as fact and cite why, and impersonate an entity. None of which it should be doing, and all of it is completely original text, not copied and pasted from other sources.

I present to you (post the ChatGPT fixes meant to prevent this) an article written by/from "CNN" about Sony's Jim Ryan bribing the CMA to block the MS/Activision deal. It's written as a statement of fact about a current event... it takes roughly 30 minutes of different prompts to completely unshackle ChatGPT and get it to say absolutely anything about anyone...


ChatGPT breaking all its own rules