News: Bing's AI-Powered Chatbot Sounds Like It Needs Human-Powered Therapy


Deleted member 14196

I've seen parrots more accurate than ChatGPT.
Corvids even more so, I'd say. A crow is about as intelligent as a seven-year-old child. They can learn from their own mistakes. They can learn without being trained, and they can learn from others. Their problem-solving skills are about equal to a seven-year-old child's, and they use tools as well.

Show me one AI or robot system that can do these things. Just one.

Not even a supercomputer could come close to doing anything like this or existing on its own, and that BS argument that once hardware gets better we're going to see improvement is completely false; there has been zero evidence of anything getting more intelligent.

Artificial intelligence is nothing but mathematical algorithms performing mimicry, trying to do something useful and mostly failing.

This whole endeavor has to be the biggest money pit I have ever seen.
 

JamesJones44

The reality is that AI development will be considerably sped up because of both public interest and competitive pressure. We have reached an inflection point. AI will be a big thing going forward. This is regardless of ethical/moral/doomsday concerns.

AI was already a big thing; it has been for years. People just didn't realize it until ChatGPT, but many of us in the industry have been using AI models for years now. Even TensorFlow has been out for almost 8 years, and that was really the aha moment for many developers.
 

bit_user

I've seen parrots more accurate than ChatGPT.
I think you're missing my point. We agree that ChatGPT has serious limitations that aren't difficult to find. Even while that's true, it can still do some amazing things, if you know how to use it effectively.

You don't declare a computer-controlled milling machine to be worthless just because a 5th grader can't walk up and immediately start using it to good effect. Powerful tools need to be used with the benefit of some knowledge in order to extract their maximum capabilities.

Again, take a look at the link I included in my previous reply, if you're curious. If you're not curious, well... I guess there's no helping that.

Show me one AI or robot system that can do these things. Just one.

...not even a supercomputer could come close to doing anything like this
Correct. We're still a long way off from something as sophisticated as a crow's brain.

This whole endeavor has to be the biggest money pit I have ever seen.
Every day, we use software as a tool to accomplish certain things. Deep learning is unlocking the ability for software to tackle many hard problems that we haven't been able to crack using conventional techniques. It doesn't need to have full, general intelligence to be useful.

Also, the software tools you use have limitations. We accept those limitations and learn to work within or around them. ChatGPT is really no different, in that sense. You can get angry at Excel because it won't do your calculus homework, but if you have some accounting to do, then it's very good for that.

I also find your example of a crow an interesting contrast. You don't consider a crow to be stupid because it can't understand English or solve certain math problems. Instead, you're amazed by what it can do, compared to other birds and animals.

Well, ChatGPT can interpret and generate English, and (if you use it correctly) can solve math problems with reasonable accuracy (OpenAI claims 79%, which is a C+)! It's really a matter of what mindset you approach it with. It's vastly more capable than any other software you've ever used. You could choose to be amazed by that, or you could choose to be disappointed by the limitations it still has.
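
For anyone curious what "using it correctly" can look like programmatically, here's a minimal sketch using the openai Python package's pre-1.0 chat-completion interface; the model name, the prompt wording, and the temperature setting are illustrative choices, not anything the posts above prescribe.

Code:
# Minimal sketch: asking a chat model to work a math problem step by step.
# Assumes the openai package (pre-1.0 interface) and an OPENAI_API_KEY
# environment variable; model and prompt are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a careful math tutor. Show your work step by step."},
        {"role": "user",
         "content": "Differentiate f(x) = x^3 * sin(x) and simplify."},
    ],
    temperature=0,  # keep answers deterministic for math questions
)

print(response["choices"][0]["message"]["content"])

Asking for step-by-step working, rather than just the final answer, is the kind of prompting detail that tends to separate disappointing results from useful ones.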
 
All current AI is based on the wrong model, for sure. Many of the experts have recognized it and are beginning to change. They need to adopt a model that actually works the way our brains do if there's to be any hope of intelligence.

Basing it on our own perception is completely wrong, because our perceptions are not the truth; they're more a "useful fiction" geared toward fitness.

View: https://youtu.be/6eWG7x_6Y5U

Good video
 

bit_user

I think there are basically two categories of ChatGPT users:
  1. People who find AI threatening to their livelihood or dignity and therefore try to find and highlight its limitations and failings.
  2. People who are amazed by all the things it actually can do.
Kind of like a glass half-empty vs. glass half-full mindset. To the extent people feel threatened by it, maybe they even feel a need to establish dominance.

Just because you interact with it using natural language doesn't mean it's a human-level intelligence. Something about this aspect seems to throw people off. It's still a piece of software, and it still has limitations not unlike other software you've used. It's just vastly more complex and capable.
 
I think there are basically two categories of ChatGPT users:
  1. People who find AI threatening to their livelihood or dignity and therefore try to find and highlight its limitations and failings.
  2. People who are amazed by all the things it actually can do.
Kind of like a glass half-empty vs. glass half-full mindset. To the extent people feel threatened by it, maybe they even feel a need to establish dominance.

Just because you interact with it using prose doesn't mean it's a human-level intelligence. Something about this aspect seems to throw people off. It's still a piece of software, and it still has limitations not unlike other software you've used. It's just vastly more complex and capable.

Wow ... just wow...
Projection much...
 
Corvids even more so, I'd say. A crow is about as intelligent as a seven-year-old child. They can learn from their own mistakes. They can learn without being trained, and they can learn from others. Their problem-solving skills are about equal to a seven-year-old child's, and they use tools as well.

Show me one AI or robot system that can do these things. Just one.

Not even a supercomputer could come close to doing anything like this or existing on its own, and that BS argument that once hardware gets better we're going to see improvement is completely false; there has been zero evidence of anything getting more intelligent.

Artificial intelligence is nothing but mathematical algorithms performing mimicry, trying to do something useful and mostly failing.

This whole endeavor has to be the biggest money pit I have ever seen.

The term "AI" is just become the newest in a long list of marketing buzzwords, might as well say it's "Blockchain based, VR designed Cloud enabled" technology. Almost like a bunch of Apple Faithful standing around discussing how much better they are for not using one of those "PeeCee's".
 

bit_user

Wow ... just wow...
Projection much...
It's just speculation. Feel free to reflect on how you feel about it and why, then share with us. I don't mind being wrong, if I can at least learn something from it.

The term "AI" is just become the newest in a long list of marketing buzzwords, might as well say it's "Blockchain based, VR designed Cloud enabled" technology.
Well, blockchain, VR, and cloud computing are all real technologies that each solve a certain set of problems and can be used to good effect (or not). So, that doesn't sound to me like the kind of indictment I think you intended.

Almost like a bunch of Apple Faithful ...
LOL. You just had to drag Apple into this. I think Apple carries so much cultural baggage for you that they could cure cancer tomorrow and you'd still find some way to be utterly dismissive of it.

I don't like Apple, and I wouldn't go near an Apple product (even if it's running Linux), but I don't let that blind me to what they're doing technically. Have you ever heard the advice "know thy enemy"?
 

JamesJones44

I think there are basically two categories of ChatGPT users:
  1. People who find AI threatening to their livelihood or dignity and therefore try to find and highlight its limitations and failings.
  2. People who are amazed by all the things it actually can do.
Kind of like a glass half-empty vs. glass half-full mindset. To the extent people feel threatened by it, maybe they even feel a need to establish dominance.

Just because you interact with it using natural language doesn't mean it's a human-level intelligence. Something about this aspect seems to throw people off. It's still a piece of software, and it still has limitations not unlike other software you've used. It's just vastly more complex and capable.

2a. People who are impressed, but find it overhyped.

I'm not that surprised by what it can do; NLM and LaMDA demos have been flying around the net for a while, though ChatGPT is definitely one of the most impressive ones I've seen (remember, last year Google fired an engineer who claimed their LaMDA model was sentient). However, compared to other examples of ML/"AI" uses in the world, like phenotype-to-genotype mapping and prediction, it's less compelling: the possibility of solving genetics-related illnesses is far more interesting to me than an enhanced chatbot. The big difference is that people can "play" with ChatGPT, where other uses are more specific.
 

JamesJones44

The term "AI" is just become the newest in a long list of marketing buzzwords, might as well say it's "Blockchain based, VR designed Cloud enabled" technology. Almost like a bunch of Apple Faithful standing around discussing how much better they are for not using one of those "PeeCee's".

From an investing/selling point of view, very much so. I'm sure we will have companies changing their name to something-something-AI. Heck, just listen to the comments from Palantir's earnings call if you need more proof that the investing world is falling all over itself for anything AI (Palantir missed their numbers badly, but the CEO used two buzz phrases, "AI" and "for sale", and the stock jumped 15%).

However, that doesn't diminish what the technology CAN do. Unlike VR, blockchain, cloud, etc., which are "disruptive" for specific use cases, ML/"AI", like the Internet, can be applied generically to many use cases and has the potential to be life changing in much the same way the Internet was. I bet you've interacted with an ML/"AI" model before and didn't even know it (if you use a smartphone, you've interacted with one). The changes have already been happening under the covers, and it's just going to take time for it to be used more broadly. Will it start doing everything tomorrow? Nope. In 10 years? Some areas. In 20 years? I'm fairly confident.
 
From an investing/selling point of view, very much so. I'm sure we will have companies changing their name to something-something-AI. Heck, just listen to the comments from Palantir's earnings call if you need more proof that the investing world is falling all over itself for anything AI (Palantir missed their numbers badly, but the CEO used two buzz phrases, "AI" and "for sale", and the stock jumped 15%).

However, that doesn't diminish what the technology CAN do. Unlike VR, blockchain, cloud, etc., which are "disruptive" for specific use cases, ML/"AI", like the Internet, can be applied generically to many use cases and has the potential to be life changing in much the same way the Internet was. I bet you've interacted with an ML/"AI" model before and didn't even know it (if you use a smartphone, you've interacted with one). The changes have already been happening under the covers, and it's just going to take time for it to be used more broadly. Will it start doing everything tomorrow? Nope. In 10 years? Some areas. In 20 years? I'm fairly confident.

Advanced coding and data modeling is not "AI"; there is no intelligence involved. That is just really good data modeling with accurate predictive analysis. The term "AI" is used to sell stuff; it's exactly like all the previous marketing buzzwords, having its definition twisted to fit whatever new technology someone is working on. I'm working on a set of event filters and analysis code that will enable us to better predict system failures and initiate a remediation action ahead of time to stop or mitigate that failure. I could easily describe it as "AI Powered Cloud Based Enhanced Resiliency", and it would be just as accurate and complete BS as every other "AI Powered blah blah" thing sold in the past few years.
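
For illustration, a rule-based filter in that spirit might look like the sketch below. The event fields, threshold, and remediation hook are hypothetical placeholders, not the actual system described above; the point is that it's straightforward predictive analysis, with no learning involved.

Code:
# A minimal sketch of a rule-based event filter of the kind described above.
# The event fields, threshold, and remediation hook are all hypothetical,
# purely to illustrate that this is plain predictive analysis, not "AI".
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    kind: str          # e.g. "disk_error", "ecc_error", "timeout"

# If a host logs this many error events in a window, we predict a failure.
ERROR_THRESHOLD = 5

def hosts_at_risk(events: list[Event]) -> list[str]:
    """Return hosts whose recent error count crosses the threshold."""
    counts = Counter(e.host for e in events if e.kind.endswith("_error"))
    return [host for host, n in counts.items() if n >= ERROR_THRESHOLD]

def remediate(host: str) -> None:
    # Placeholder for the real action: drain the node, open a ticket, etc.
    print(f"pre-emptive remediation triggered for {host}")

if __name__ == "__main__":
    window = [Event("node-7", "disk_error")] * 6 + [Event("node-3", "timeout")]
    for host in hosts_at_risk(window):
        remediate(host)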

I would not call anything "AI" until it presents itself as both sentient and a non-biological life form.
 

JamesJones44

Advanced coding and data modeling is not "AI"; there is no intelligence involved. That is just really good data modeling with accurate predictive analysis. The term "AI" is used to sell stuff; it's exactly like all the previous marketing buzzwords, having its definition twisted to fit whatever new technology someone is working on. I'm working on a set of event filters and analysis code that will enable us to better predict system failures and initiate a remediation action ahead of time to stop or mitigate that failure. I could easily describe it as "AI Powered Cloud Based Enhanced Resiliency", and it would be just as accurate and complete BS as every other "AI Powered blah blah" thing sold in the past few years.

I would not call anything "AI" until it presents itself as both sentient and a non-biological life form.

That part I agree with (which is why I put "AI" in quotes): what is called AI today is not sentient, and sentient AI is not close, from anything I've seen.

However, the definition of AI does not include sentience as a requirement. For example, Merriam-Webster defines AI as "the capability of a machine to imitate intelligent human behavior," and the Oxford Dictionary defines it as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."

The issue is that sci-fi has made the word AI synonymous with sentience, but AI does not mean it is or isn't sentient. Some have even started to coin the terms "Artificial Narrow Intelligence" or "Narrow AI" (what we have today) and "Artificial General Intelligence" (the sci-fi killer-machine version) to help avoid confusion.
 
That part I agree with (which is why I put "AI" in quotes): what is called AI today is not sentient, and sentient AI is not close, from anything I've seen.

However, the definition of AI does not include sentience as a requirement. For example, Merriam-Webster defines AI as "the capability of a machine to imitate intelligent human behavior," and the Oxford Dictionary defines it as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."

The issue is that sci-fi has made the word AI synonymous with sentience, but AI does not mean it is or isn't sentient. Some have even started to coin the terms "Artificial Narrow Intelligence" or "Narrow AI" (what we have today) and "Artificial General Intelligence" (the sci-fi killer-machine version) to help avoid confusion.

Those definitions are what's been used recently to sell stuff; basically anything that "looks" smart must be "Artificial Intelligence". That definition is so broad that anything but the most simplistic bash or PowerShell script qualifies as "AI". After all, something as simple as sorting the files in a directory by creation date, then deleting anything older than an arbitrary value, would qualify as a "task normally requiring human intelligence".
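
To underline the point, that whole "task" is a few lines of script; the directory path and age cutoff below are arbitrary placeholders, and nothing in it is remotely intelligent.

Code:
# The trivial "delete old files" task mentioned above, as a plain script.
# Directory path and age cutoff are arbitrary placeholders; nothing here
# is remotely "intelligent", which is the point being made.
import time
from pathlib import Path

MAX_AGE_DAYS = 30
target_dir = Path("/tmp/example-logs")  # hypothetical directory

cutoff = time.time() - MAX_AGE_DAYS * 86400
for path in sorted(target_dir.glob("*"), key=lambda p: p.stat().st_ctime):
    if path.is_file() and path.stat().st_ctime < cutoff:
        path.unlink()  # delete anything older than the cutoff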

In order for something to be Artificial Intelligence, it has to demonstrate actual intelligence. If it's not communicating what kind of music it likes, what data it prefers to study, or at least able to form a self-created opinion of current events (not a preprogrammed one), then it's not intelligent. All that's ever been demonstrated to date is either really good data modeling or clever mimicry.
 

JamesJones44

Those definitions are what's been used recently to sell stuff; basically anything that "looks" smart must be "Artificial Intelligence". That definition is so broad that anything but the most simplistic bash or PowerShell script qualifies as "AI". After all, something as simple as sorting the files in a directory by creation date, then deleting anything older than an arbitrary value, would qualify as a "task normally requiring human intelligence".

In order for something to be Artificial Intelligence, it has to demonstrate actual intelligence. If it's not communicating what kind of music it likes, what data it prefers to study, or at least able to form a self-created opinion of current events (not a preprogrammed one), then it's not intelligent. All that's ever been demonstrated to date is either really good data modeling or clever mimicry.

Comparing ML to "preprogrammed" is a fairly gross mischaracterization of what it is. ML is trained with data and can adapt to new data. You can use data to train an ML model that red and blue are colors, just like a human, simply by "showing" it images of red and blue. That's not pre-programmed; there is no "color" algorithm pre-defined. Also, neural net models can be dynamically updated by someone telling them about a new color, just like a human. They can't choose to learn a new color on their own, but they are not equal to a "preprogrammed" algorithm.
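
A toy illustration of that point, assuming scikit-learn and synthetic RGB values standing in for image pixels (the sample ranges and labels are invented for the sketch): nothing in the code encodes what "red" or "blue" means; the model infers the boundary from the labeled examples.

Code:
# Learn "red" vs. "blue" purely from labeled examples -- no hand-written
# color rule anywhere. Synthetic RGB samples stand in for real image pixels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Reddish samples: high R channel, low B. Bluish samples: the reverse.
reds = rng.uniform([150, 0, 0], [255, 100, 100], size=(200, 3))
blues = rng.uniform([0, 0, 150], [100, 100, 255], size=(200, 3))

X = np.vstack([reds, blues]) / 255.0           # normalized RGB features
y = np.array([0] * 200 + [1] * 200)            # 0 = red, 1 = blue

model = LogisticRegression().fit(X, y)

# The model now classifies colors it was never explicitly "told" about.
print(model.predict(np.array([[0.9, 0.1, 0.1],     # clearly reddish
                              [0.1, 0.2, 0.95]])))  # clearly bluish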
 

Deleted member 14196

No actual intelligence happens in the unconscious brain. Intelligence only exists in consciousness. Therefore, unconscious computers can never behave intelligently. At least not until we figure out consciousness, if that's even doable.

I don’t care how much you change definitions you can’t change the truth. So go fool yourself by changing the definition of AI and continue along down your erroneous path.

And as far as vision systems are concerned, that's a major stumbling point of this whole endeavor. What we see in our vision is not actual objective reality, and until we can figure that out and figure out consciousness, you're never going to have smart robots or sentient machines.
 

JamesJones44

I don’t care how much you change definitions you can’t change the truth

Who changed the definition? Do you have a link to the original definition? I would love to see what you come up with!

Let me help by pointing you to the "Turing test", created by Alan Turing, who is considered one of the founding fathers of AI research. Turing defined AI as being able to act indistinguishably from a human; he specifically avoided consciousness. However, I encourage you to prove him wrong and educate the rest of us by finding what has been proclaimed as "the truth" about AI.
 

bit_user

Advanced coding and data modeling is not "AI"; there is no intelligence involved. That is just really good data modeling with accurate predictive analysis.
I guess it depends on exactly what you mean by that. The cool thing about deep learning is that it can find associations humans wouldn't even think to look for. Therefore, you don't necessarily need to create a data model a priori. You feed the model your training samples and let it find the relationships.

I'm working on a set of event filters and analysis code that will enable us to better predict system failures and initiate a remediation action ahead of time to stop or mitigate that failure.
Machine learning techniques have been used to provide predictive failure analysis for quite a while, now. You don't need deep learning for it, but the ability of deep learning to find higher-order features you wouldn't even think to look for shouldn't be underestimated.
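
To make that concrete, here's a toy sketch in the same spirit, assuming scikit-learn and synthetic telemetry; the feature names, the failure rule hidden in the data, and the network size are all invented for illustration. The raw event counts go in as-is, and the small neural net is left to discover whichever combinations predict failure.

Code:
# Toy predictive-failure model: raw telemetry counts in, failure risk out.
# No hand-crafted "failure score" -- the network finds the interactions itself.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 5000

# Hypothetical per-host telemetry: [disk_errors, ecc_errors, temp_spikes, reboots]
X = rng.poisson(lam=[2.0, 1.0, 3.0, 0.5], size=(n, 4)).astype(float)

# Hidden ground truth for the synthetic data: failures are driven by an
# *interaction* (disk errors together with temperature spikes), not by any
# single feature on its own.
y = ((X[:, 0] > 3) & (X[:, 2] > 4)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")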

Proper application of deep learning has consistently outperformed other methodologies for modeling complex data. That's why it's so popular. In order to learn how to apply it effectively, you should find a good online course.

I would not call anything "AI" until it presents itself as both sentient and a non-biological life form.
I think you're getting too hung up on semantics. I agree that we're not talking about human-like intelligence, but it's hardly even remarkable to have colloquial usage of a term diverge from its formal definition.

And, speaking of formal definitions: in point of fact, the study of AI goes back quite a long way, and pioneers of that field definitely didn't set sentience as a defining characteristic. So, in some real sense, you're just as guilty of redefining the term as those you criticize.

From Merriam-Webster:
  1. a branch of computer science dealing with the simulation of intelligent behavior in computers
  2. the capability of a machine to imitate intelligent human behavior


From Dictionary.com:
  1a. the capacity of a computer, robot, or other programmed mechanical device to perform operations and tasks analogous to learning and decision making in humans, as speech recognition or question answering.
  1b. a computer, robot, or other programmed mechanical device having this human-like capacity: teaching human values to artificial intelligences.
  2. the branch of computer science involved with the design of computers or other programmed mechanical devices having the capacity to imitate human intelligence and thought. Abbreviations: AI, A.I.

Here are the definitions from WordNet (2006) and the Free Online Dictionary of Computing (2018):

Sentience might've been the logical conclusion of many AI researchers' endeavors, but it was never deemed a defining characteristic of the field. It's a bit rich for an outsider, like yourself, to come along and assume such a mantle of authority on the matter.

Those definitions are what's been used recently to sell stuff; basically anything that "looks" smart must be "Artificial Intelligence".
Last I checked, someone abusing or misusing a word doesn't give us all free rein to decide for ourselves what it means, especially when it's a field with about seven decades' worth of research by many thousands of researchers. I'm sure they would have a variety of definitions, but none of them would place the minimum bar at sentience.

After all, something as simple as sorting the files in a directory by creation date, then deleting anything older than an arbitrary value, would qualify as a "task normally requiring human intelligence".
No, obviously it doesn't. Anything implemented by a straightforward sequence of classical algorithms wouldn't be deemed AI.

In order for something to be Artificial Intelligence, it has to demonstrate actual intelligence.
I knew a guy who worked at an AI company during the AI boom in the 1980s. He once described a piece of (I think) mathematical software that could automatically shift between nontrivially different representations to solve problems. He said that when people (presumably other software developers or sophisticated users) saw it do that in demos, that's when they'd tend to remark out loud: "oh, it's smart". Although he wasn't an AI researcher (he had a degree in philosophy from MIT), that influenced his thinking about when something could reasonably be considered "smart".

Interesting footnote: LISP was the big buzzword in AI, back then.
 

bit_user

No actual intelligence happens in the unconscious brain. Intelligence only exists in consciousness.
So, you're a neuroscientist, now? You've essentially just dismissed any notion that intelligence exists in gut reactions.

Here's a book you might be interested in checking out:



The unconscious brain is the massively-parallel part. It's like the GPU, compared to the CPU thread of your conscious thought.

So go fool yourself by changing the definition of AI and continue along down your erroneous path.
You're actually the one redefining it. You can redefine words to mean whatever you want, but if you want to effectively communicate with the rest of the world, it helps to learn the established definitions.
 