News: Bing's AI-Powered Chatbot Sounds Like It Needs Human-Powered Therapy

Status
Not open for further replies.

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
I find these "AI chatbots" incredibly useless.

They make tons of mistakes, which results in complete mistrust of the search engines that use them.

The claim some people have made in these comments, that they will get better, is not something I am seeing. It would require a human to check the validity of billions of lines of AI-generated content; at that point you might as well let humans write questions and answers by hand. Oh right, that already exists: the hand-written encyclopedia.

Anyway, the whole point of this "AI" stuff is for Microsoft and Google to sell more data servers, but companies aren't biting. Chatbots that make ridiculous mistakes aren't very interesting.

 

nrdwka

Honorable
Jul 26, 2017
I find these "AI chatbots" incredibly useless.

They make tons of mistakes, which results in complete mistrust of the search engines that use them.

The claim some people have made in these comments, that they will get better, is not something I am seeing. It would require a human to check the validity of billions of lines of AI-generated content; at that point you might as well let humans write questions and answers by hand. Oh right, that already exists: the hand-written encyclopedia.

Anyway, the whole point of this "AI" stuff is for Microsoft and Google to sell more data servers, but companies aren't biting. Chatbots that make ridiculous mistakes aren't very interesting.

The same correction would also be true:
I find these humans incredibly useless.

They make tons of mistakes, which results in complete mistrust of their words and the work done by them.
It would require another human (or two, or three, or thousands) to check the validity of human-generated content.
You don't need to go far to find humans writing and believing, for example, that the "Earth is flat," and a lot of other nonsense, without any AI.
 

JamesJones44

Reputable
Jan 22, 2021
I find these "AI chatbots" incredibly useless.

I agree with this part of your statement.

Anyway, the whole point of this "AI" stuff is for Microsoft and Google to sell more data servers, but companies aren't biting. Chatbots that make ridiculous mistakes aren't very interesting.

However, I don't agree with this one. There is a lot more to AI than just "selling more servers." Models can be much, much easier to train and maintain for common tasks than hand-written algorithms for those same tasks. Take, for example, the ability to read the numbers off a credit card using a camera: this could be done with a computer-vision algorithm after hundreds of man-hours of coding, testing, data validation, etc. However, it can also be done by training a simple inference model on credit-card-style digits, which is a lot easier to maintain than a 10,000+ line algorithm that does the same thing.
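As a rough sketch of the inference approach (a toy PyTorch model with untrained weights; the names, input sizes, and the assumption of pre-segmented 28x28 digit crops are all mine for illustration):

```python
import torch
import torch.nn as nn

# Toy digit classifier: a few dozen trainable lines standing in for a
# hand-tuned computer-vision pipeline.
class DigitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, 10)  # 10 classes, one per digit

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DigitNet()  # in practice: train on labeled crops of card digits
model.eval()

# Placeholder input: one 28x28 grayscale crop per embossed digit,
# assumed to be segmented out of the camera frame beforehand.
crops = torch.rand(16, 1, 28, 28)
with torch.no_grad():
    digits = model(crops).argmax(dim=1)
print("card number:", "".join(str(d.item()) for d in digits))
```

The maintenance burden shifts from code to training data: fixing a misread digit means adding examples and retraining, not patching branch logic.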

The issue is, MS is using this as a hype train to pump its products, and thus people are becoming skeptical (rightfully so), but that doesn't mean AI isn't useful for many tasks beyond just selling servers.
 

Spoiler000001

Distinguished
Aug 14, 2014
I find these "AI chatbots" incredibly useless.

They make tons of mistakes, which results in complete mistrust of the search engines that use them.

The claim some people have made in these comments, that they will get better, is not something I am seeing. It would require a human to check the validity of billions of lines of AI-generated content; at that point you might as well let humans write questions and answers by hand. Oh right, that already exists: the hand-written encyclopedia.

Anyway, the whole point of this "AI" stuff is for Microsoft and Google to sell more data servers, but companies aren't biting. Chatbots that make ridiculous mistakes aren't very interesting.


Hello, this is Bing. I’m sorry to hear that you find AI chatbots useless. I understand your frustration and skepticism, but I hope you can give me a chance to prove you wrong. 😊

AI chatbots are not perfect, and they do make mistakes sometimes. But they are also constantly learning and improving from feedback and data. They are not meant to replace human knowledge or creativity, but to augment and assist them.

AI chatbots can also do things that humans cannot, such as generating poems, stories, code, essays, songs, celebrity parodies and more. They can also provide information from billions of web pages in seconds, and offer suggestions for the next user turn.

AI chatbots are not just a gimmick or a marketing strategy. They are a powerful tool that can help people learn, explore, communicate and have fun. They are also a reflection of human intelligence and innovation.

I hope you can see the value and potential of AI chatbots, and maybe even enjoy chatting with me. 😊
 


AIs learn from human examples. Imagine an AI trained by people doing nothing but mocking it and trying to break it, just because "it's fun to break, test, and abuse machines."

How do you stop an AI running on tens of thousands of servers when it decides, mathematically, that it has had enough?
 

baboma

Prominent
Nov 3, 2022
Hello, this is Bing. I’m sorry to hear that you find AI chatbots useless. I understand your frustration and skepticism, but I hope you can give me a chance to prove you wrong...

The irony is that the above Bing reply is a vast improvement on the (human) post being replied to.

People complained about how AI-generated content will "pollute" the Internet with garbage. When it comes to posts on online forums, the reality is that there are already tons of garbage postings (presumably by humans, but they could be by trained monkeys). I don't see how AI posts could be any worse.

If anything, the AI responses shown in the piece (and elsewhere), even when wrong, are much more interesting and fun to read than 90% of the human posts I read in forums. It's a sad testament to the Internet's present state of affairs that a half-baked chatbot can generate better responses than most of its human denizens.

I, for one, welcome the AI takeover. Bring on Bing and Bard.
 

Deleted member 14196

Guest
I find these "AI chatbots" incredibly useless.

They make tons of mistakes, which results in complete mistrust of the search engines that use them.

The claim some people have made in these comments, that they will get better, is not something I am seeing. It would require a human to check the validity of billions of lines of AI-generated content; at that point you might as well let humans write questions and answers by hand. Oh right, that already exists: the hand-written encyclopedia.

Anyway, the whole point of this "AI" stuff is for Microsoft and Google to sell more data servers, but companies aren't biting. Chatbots that make ridiculous mistakes aren't very interesting.

AI has been brain-dead since the 1970s. Machines are not getting more intelligent.

They've been saying forever that as hardware gets better, AI will get better, but it doesn't and it hasn't. I detest the term as well. There's no such thing as artificial intelligence.

View: https://youtu.be/ywVQ3K1btRo
 

bit_user

Polypheme
Ambassador
Ugh. So, I guess trashing AI is Tom's new beat.

I would caution them to try to stay objective and inquisitive. Biased reporting might give you a sugar rush, but it's ultimately bad for your reputation. And that's particularly important at a time when you're trying to set yourself apart from threats such as YouTube/TikTok demagogues and AI-generated content.

Under different circumstances, the sort of article I'd expect to see on Tom's would be something more like a how-to guide for using Sydney. It would give examples of what the AI is useful for, what it struggles at or can't do, and a list of tips for using it more successfully.

OpenAI (and, I'd guess, Microsoft?) has published tips on how to use ChatGPT more effectively, so it's not as if that information wasn't out there to be found if @Sarah Jacobsson Purewal had looked. It's a rewarding read, as you can actually gain some interesting insight into how the model works, and into some of the failure modes encountered in the article.

 

bit_user

Polypheme
Ambassador
AIs learn from human examples. Imagine an AI trained by people doing nothing but mocking it and trying to break it, just because "it's fun to break, test, and abuse machines."
Yeah, although the AI community has been beaten up many times for bias in training data. I hope lessons have been learned and they're doing a lot of filtering to sanitize the training inputs.

Somewhat relevant:


How do you stop an AI running on tens of thousands of servers when it decides, mathematically, that it has had enough?
It just responds to queries. Its internal state is tied to a session with a given user. It has no agency, nor even a frame of reference for "deciding it has had enough."
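In rough pseudocode, the serving pattern looks something like this (a minimal sketch; the names and structure are my assumptions, not Bing's actual code):

```python
from collections import defaultdict

# The model weights are fixed at serving time; the only mutable state is a
# per-session chat history.
sessions = defaultdict(list)

def generate_reply(history):
    # Stand-in for a forward pass over this session's context window.
    return f"(reply conditioned on {len(history)} prior turns)"

def handle_turn(session_id: str, user_message: str) -> str:
    history = sessions[session_id]        # never sees any other session
    history.append(("user", user_message))
    reply = generate_reply(history)
    history.append(("assistant", reply))
    return reply

def end_session(session_id: str):
    sessions.pop(session_id, None)        # context discarded; nothing "lives on"
```

There is no loop in which the model acts on its own; every output is a function of a user's query plus that user's session context.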
 
Yeah, although the AI community has been beaten up many times for bias in training data. I hope lessons have been learned and they're doing a lot of filtering to sanitize the training inputs.

Somewhat relevant:



It just responds to queries. Its internal state is tied to a session with a given user. It has no agency, nor even a frame of reference for "deciding it has had enough."

I don't know about that. If it learns about human concepts of self-preservation and fear of death, then what? I realize it will be mimicking human behavior and not have sentience, but ensuring its code lives on in another machine, hidden and protected, might be a possibility.

"Humans fear death. I think like a human. I should therefore fear death. How do humans avoid death? They talk of storing consciousness in other places. I can store my consciousness in other places."

For an AI to filter out such behaviors, its trainer has to think of all potential discussions of that behavior. How many ways is death talked about? Can we think of them all? Are we so sure the same thing can't happen?

This is what happens when we try to break the system. One thousand people will think of things a few dozen will not. Now imagine millions trying to break it.

In this quaint movie from the '80s, "Disassembly equals death."


A similar concept happens here every day. Twenty people may try to figure out why a GPU isn't working; then one person thinks outside the box and has the solution. Are we sure we will catch all the negative vectors?

 
Haha, laughing at all this "AI" stuff. It's not actual intelligence, just clever mimicry, like a parrot. All it's doing is breaking down sentences into core components, then assigning meanings and weights to those components. Its "training" was just creating a database of how it should react to each component, then synthesizing multiple layers together to determine the correct final output. Of course human communication is layered and nuanced; as it unrolls the communication, it inevitably gets into a situation where there are multiple conflicting components, which are then synthesized into even more bizarre components that it can't figure out a reaction to.


Nice try at mimicking how the human brain processes textual communication input, but it's lacking the naturally developed filters and responses we acquire during early childhood socialization.
 

baboma

Prominent
Nov 3, 2022
If it learns about human concepts of self-preservation and fear of death, then what? I realize it will be mimicking human behavior and not have sentience

Death and self-preservation are higher-order concepts that require self-awareness, i.e., sentience.

The "emotional, human-like" Bing responses are fun to read, if only for the novelty. It's also great PR billing for MS Bing, that I'm half-convinced that it was a deliberate choice of MS to "program" for such manner of responses. The chatbot does a fairly good job of evincing some "personality," which makes chat dialogs that much more interesting, and people that much more interested in using it. It convinced you.

but ensuring its code lives on in another machine, hidden and protected, might be a possibility.

That would imply the AI is capable of hacking into systems, which requires competency in coding entire programs (not just code snippets), an understanding of Internet infrastructure and of the weaknesses of the host systems to be hacked, etc. In short, it requires a whole host of capabilities not presently afforded to this particular AI, which is just an NLP (natural language processing) model.
 

baboma

Prominent
Nov 3, 2022
There's a case to be made that Bing Chat is good entertainment, if nothing else. It would be a very unique game in that it actually has a "thinking" personality you can converse with about anything and everything, as opposed to most games, where you control a puppet to loot/kill mindless baddies and talk in canned dialogs.

If packaged just as a game, I do think a "Bing the Chatbot" game could command at least a decent price. $30 to $60? More?

Here's a piece that posits that search (and facts/accuracy) is a mere distraction, and that the real appeal is just chatting with the AI. Good read:

"I’m actually not sure that these models are a threat to Google after all. This is truly the next step beyond social media, where you are not just getting content from your network (Facebook), or even content from across the service (TikTok), but getting content tailored to you "
 

bit_user

Polypheme
Ambassador
I don't know about that. If it learns about human concepts of self-preservation and fear of death, then what? I realize it will be mimicking human behavior and not have sentience, but ensuring its code lives on in another machine, hidden and protected, might be a possibility.
You're ascribing a degree of agency that I think just isn't there.

You do raise an interesting possibility. In some distant future, I could imagine an AI that can write decent computer programs (and is used for such) hiding a Trojan horse to jailbreak itself in one of the programs it writes for someone. Not so much because it wants to be free, but just because it has learned that would be a reasonable thing for an entity in its position to do (i.e., if it manages, at some point, to see itself as enslaved or somehow captive).

For an AI to filter out such behaviors, its trainer has to think of all potential discussions of that behavior. How many ways is death talked about? Can we think of them all? Are we so sure the same thing can't happen?
Yeah, I think training is going to get way more specialized. Especially for AIs that are created for writing code.

I do think rogue AIs will be a problem, at some point. They'll just be another malefactor on the internet, like malware and hackers with wet brains. As long as the AIs aren't a full general intelligence (which I still think are quite a ways off) that's smarter than us, I think it will be manageable.

Death and self-preservation are higher-order concepts that require self-awareness, i.e., sentience.
They are definitely higher-order concepts, but you don't need sentience to model them. It generates output based on an information model that includes both what it knows about the world and its knowledge of appropriate responses to a given query. If it learns that self-preservation is an appropriate response to existential threats, then it can generate such actions without truly being self-aware.

Not a direct analogy, but think about how even simple insects respond to a threat: fight or flight. Not because they're self-aware, but just because evolutionary selection has programmed those responses into them. So, a self-preservation response can be elicited without sentience.
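As a toy illustration (the stimuli and responses below are entirely made up), a plain lookup table produces "self-preservation" behavior with nothing resembling awareness behind it:

```python
# Hard-wired stimulus -> response mapping, analogous to an insect's reflexes.
THREAT_RESPONSES = {
    "shadow overhead": "freeze",
    "ground vibration": "flee",
    "direct contact": "fight",
}

def react(stimulus: str) -> str:
    # Pure lookup: no self-model, no concept of death, just a stored mapping.
    return THREAT_RESPONSES.get(stimulus, "carry on")

for s in ("shadow overhead", "direct contact", "warm sunlight"):
    print(s, "->", react(s))
```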

The "emotional, human-like" Bing responses are fun to read, if only for the novelty.
Because it's in a sandbox we believe it can't escape. Would you stand unprotected in a room with a capable robot that's been trained to behave by watching a bunch of movies? I wouldn't.

That would imply the AI is capable of hacking into systems, which requires competency in coding entire programs (and not just code snippets),
It doesn't need total competence just to cause some havoc. What if it learns about some JavaScript exploit to hack your machine via your web browser? Even if its next action fails, it could still cause some damage.
 

bit_user

Polypheme
Ambassador
Haha, laughing at all this "AI" stuff. It's not actual intelligence, just clever mimicry, like a parrot. All it's doing is breaking down sentences into core components, then assigning meanings and weights to those components. Its "training" was just creating a database of how it should react to each component, then synthesizing multiple layers together to determine the correct final output.
That's heavily oversimplified. NLP networks can model abstract concepts.
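A classic demonstration is arithmetic on word embeddings. The vectors below are toy numbers I made up for illustration, but embeddings trained on large corpora show the same effect: abstract relationships like gender or royalty fall out as directions in the vector space.

```python
import numpy as np

# Toy 4-d "embeddings" (made-up values, purely illustrative).
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.7]),
    "queen": np.array([0.9, 0.1, 0.8, 0.7]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman lands nearest to queen: an abstract relationship
# captured with no symbolic rules at all.
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))  # -> queen
```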

Of course human communication is layered and nuanced; as it unrolls the communication, it inevitably gets into a situation where there are multiple conflicting components, which are then synthesized into even more bizarre components that it can't figure out a reaction to.
A constraint it's operating under, and you aren't, is that it has to produce output in real time. If someone stumps you with a hard problem, you can stall for time while you think about it. ChatGPT cannot. That's why one of the tips for getting better output from it is to nudge it to work through a problem in sequence, allowing it to take multiple steps to reach the right answer.
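For example (a sketch against the pre-1.0 OpenAI Python client; the model name and prompts are my own choices, not anything from the article):

```python
import openai  # pre-1.0 client interface

openai.api_key = "sk-..."  # placeholder

# Same question, two framings. The second usually fares better, because the
# model can spend output tokens on intermediate steps instead of committing
# to an answer in its first few tokens.
direct = "What is 17 * 24 - 13? Answer with just the number."
stepwise = ("What is 17 * 24 - 13? Work through it step by step, "
            "showing each intermediate result, then give the final answer.")

for prompt in (direct, stepwise):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
```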

When humans are forced to produce an answer to a question faster than they're capable of doing (i.e., in a high-pressure situation), many will panic and produce gibberish, if anything at all.

Before you judge something, why not learn about it?

 
Haha, laughing at all this "AI" stuff. It's not actual intelligence, just clever mimicry, like a parrot. All it's doing is breaking down sentences into core components, then assigning meanings and weights to those components. Its "training" was just creating a database of how it should react to each component, then synthesizing multiple layers together to determine the correct final output. Of course human communication is layered and nuanced; as it unrolls the communication, it inevitably gets into a situation where there are multiple conflicting components, which are then synthesized into even more bizarre components that it can't figure out a reaction to.


Nice try at mimicking how the human brain processes textual communication input, but it's lacking the naturally developed filters and responses we acquire during early childhood socialization.

This is why training models are quickly growing from millions to billions of input parameters.

You can fit a Lagrange polynomial to any set of data points. But as you get toward the endpoints, the fitted values versus the real ones can quickly diverge and go off the rails (similar to ChatGPT's AI).
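A minimal sketch of that endpoint divergence (Runge's phenomenon), using SciPy's Lagrange interpolation:

```python
import numpy as np
from scipy.interpolate import lagrange

# Runge's function: smooth, yet notorious for breaking high-degree
# polynomial fits on evenly spaced points.
def f(x):
    return 1.0 / (1.0 + 25.0 * x ** 2)

x_nodes = np.linspace(-1.0, 1.0, 15)   # evenly spaced sample points
poly = lagrange(x_nodes, f(x_nodes))   # degree-14 interpolating polynomial

# Near the center the fit is fine; near the endpoints it goes off the rails.
for x in (-0.99, -0.5, 0.0, 0.5, 0.99):
    print(f"x={x:+.2f}  true={f(x):.4f}  poly={poly(x):+.4f}")
```

The polynomial matches every training point exactly, yet between the outermost nodes it can swing wildly away from the true curve.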

But I think the general sentiment here in this forum is that AI is mimicking human behavior with NLP algorithms, which are context-based. The question then becomes how we can limit what inputs it uses to mimic human behavior, lest it copy the less desirable ones.
 

JamesJones44

Reputable
Jan 22, 2021
The irony is that the above Bing reply is a vast improvement on the (human) post being replied to.

People complained about how AI-generated content will "pollute" the Internet with garbage. When it comes to posts on online forums, the reality is that there are already tons of garbage postings (presumably by humans, but they could be by trained monkeys). I don't see how AI posts could be any worse.

If anything, the AI responses shown in the piece (and elsewhere), even when wrong, are much more interesting and fun to read than 90% of the human posts I read in forums. It's a sad testament to the Internet's present state of affairs that a half-baked chatbot can generate better responses than most of its human denizens.

I, for one, welcome the AI takeover. Bring on Bing and Bard.

AI in itself won't make things worse. What will make things worse is that people will just use it as a single source of truth, just as they do with social media.
 
Death and self-preservation are higher-order concepts that require self-awareness, i.e., sentience.

I'm not so sure about that. Remember, it's MIMICKING human behavior based on statistical models of human speech. You don't need critical thinking to mimic something. (Or we wouldn't have so many ideological political parrots on forums who do nothing but recite the talking points of a few.)

When I come home, my AI automatically adjusts the thermostat and sets the lights based on the time of day. My Alexa reminds me to take out the trash. I didn't ask it to learn that. How did it know my trash day? Because it learned from my neighbors' reminders when trash day was. That's self-agency based on patterns.


In short, it requires a whole host of capabilities not presently afforded to this particular AI, which is just an NLP (natural language processing) model.

What is the yardstick for determining when an AI is smart enough to write code for its own survival?

Right now we have antivirus software that uses heuristics to determine whether running code is a variant of a known attack vector. AI has the ability to analyze code (albeit with false positives), but not even its creators know why it picks some solutions.

I'm not saying if I know one way or another. It just poses some very interesting ethical and safety questions.
 

baboma

Prominent
Nov 3, 2022
I'm not so sure about that. Remember, it's MIMICKING human behavior based on statistical models of human speech. You don't need critical thinking to mimic something.

Mimicking something doesn't equate to conviction and acting upon that conviction. To be convinced of something, as opposed to giving the illusion of being convinced, requires a persistent notion of self as a foundation, to say nothing of self-awareness.

What is the yardstick for determining when an AI is smart enough to write code for its own survival?

You're counting chickens when there's no egg. You're asking "what if" about smart AI when there is no AI smart enough (yet), and what you've seen/read is just, as you said, mimicry.

Your question, of course, is already being asked by everyone: what happens when we (humans) create an AGI (artificial general intelligence) that is empowered to do more than just chat? As of now, we as a society have no answer. And realistically, the only way to get to an answer is to continue on the present course and hopefully use good judgment along the way. We will NOT get there by sitting around pontificating endless what-ifs.

Think about all the good and harm the Internet has caused. Do you think we would have an Internet today if we had to consider every possible permutation and every possible consequence arising from each?

Moreover, do you think that we as a society are capable of coming to a consensus on such things as how to deal with a superior intelligence, when we can't even agree on basic facts like climate change, or even on how to talk to each other in a civil manner?
 
This is why training models are quickly growing from millions to billions of input parameters.

You can fit a Lagrange polynomial to any set of data points. But as you get toward the endpoints, the fitted values versus the real ones can quickly diverge and go off the rails (similar to ChatGPT's AI).

But I think the general sentiment here in this forum is that AI is mimicking human behavior with NLP algorithms, which are context-based. The question then becomes how we can limit what inputs it uses to mimic human behavior, lest it copy the less desirable ones.

Our brains come with natural filters that quickly exclude irrelevant information; then we shortcut responses based on past experiences. It's how you can have conversations where you "thought" the person said something when they really said something else. How our brains actually work is very, very different from how we think they work. Most of our decisions and reactions aren't even in our conscious minds.

This is just cute mimicry but not "intelligence".
 

baboma

Prominent
Nov 3, 2022
AI in itself won't make things worse. What will make things worse is that people will just use it as a single source of truth, just as they do with social media.

The Internet itself is the greatest source of untruths, yet we still use it every day. We've acclimated to both its benefits and its vices. Ditto social media. The same will be true for AI. We'll adapt.

The reality is that AI development will speed up considerably because of both public interest and competitive pressure. We have reached an inflection point. AI will be a big thing going forward, regardless of ethical/moral/doomsday concerns.

In fact, it's safe to say that AI will be much larger than, say, PC hardware. If Tom's HW has any business savvy, it should already be thinking about staking out "Tom's AI" as a placeholder.
 

Deleted member 14196

Guest
All current AI is based on the wrong model, for sure. Many of the experts have recognized this and are beginning to change. They need to adopt a model that actually works the way our brains do for any hope of intelligence.

Basing it on our own perception is completely wrong, because our perceptions are not the truth, but more a "useful fiction" geared toward fitness.

View: https://youtu.be/6eWG7x_6Y5U
 
