News: Google, Microsoft Fire Next Shots in AI Chatbot Showdown

Your son had an epileptic seizure. Did this happen before?
-First time. We did give him some Absinthe to treat his cough.
You gave your 10-year-old Absinthe? Why?
-We asked ChatGPT how to treat coughs.
Did ChatGPT give you any indication regarding the date of this information?
-No. It did say we could try something called bloodletting too. I will look this up next.
 
" lunch ideas based on what's in the fridge" - And that just for several billions? What a bargain!!!
Now if it just could tell how to fill that fridge or with what to cook when one has been affected by a layoff, such as at at MS and Google, huh?
 
On all sides, this is definitely overhyped. There are exciting things happening in AI (LaMDA isn't one of them, IMO), but it's generating a lot of buzz, so that part is good.
 
Microsoft, Google, and the other big corporations need to put ChatGPT and their other AI projects back inside their research departments. These things aren't fully baked yet. Not saying they're a bad idea. I'm just getting annoyed at having to contest automated flags for nudity on hand-drawn artwork of fantasy scenes, or having my car try to hang me with the seat belt every time it needs to pass under a bridge while on cruise control. I'm also not looking forward to a future where airlines force you to seek customer service from an AI to fix their own inability to keep to their schedule.

Way too early to be rolling out so many error-prone AIs. Again, not saying it shouldn't be done...we're just not there yet.
 
Microsoft, Google, and the other big corporations need to put ChatGPT and their other AI projects back inside their research departments. These things aren't fully baked yet.
Too late. Commercial pressures will always push these sorts of things out the door before they're ready.

Does anyone remember how long Gmail was in beta? About five years, I think. And that was just email!

Seeing Microsoft and Google battling it out on a new field is good news, I think.
The haste and recklessness resulting from an A.I. race could plausibly lead to a sort of nuclear accident, like we've long been fearing.

For one thing, I think we probably haven't thought nearly enough about how this sort of tech could be abused by bad actors.
 
Maybe next, they can bring back Zune! :D
Zune was awesome. Smooth trackpad instead of the swirly creative thingy? Play anything from any format? OLED screen in the HD, like 10 years before Apple? The only thing it was missing was rounder corners and a walled garden approved by cool sheeple. Well, that and way more apps after the iPhone came out, but that was after the era of the classic Zune.
That's also why Windows Phone failed: that, and the fact that Windows Phone only ran fake Windows, even though it could have run the real version on Atom hardware. At least the handhelds are coming out. Maybe in a couple more years we will finally get a real Windows phone.

And Bing has always been better because it is less filtered and less shady.

Cortana is a needy nag. There are enough of those in real life.
 
Your son had an epileptic seizure. Did this happen before?
-First time. We did give him some Absinthe to treat his cough.
You gave your 10-year-old Absinthe? Why?
-We asked ChatGPT how to treat coughs.
Did ChatGPT give you any indication regarding the date of this information?
-No. It did say we could try something called bloodletting too. I will look this up next.
Well, Microsoft's $10B bought them some lawyers, because now there are disclaimers when you log in.

It's hard to filter out "bad" data and not push an agenda, viewpoint, bias, etc.

That said, I won't be surprised if we end up with a long list of topics (like medical) that are removed entirely to appease the liability team.
 
It's hard to filter out "bad" data and not push an agenda, viewpoint, bias, etc.
You're not thinking hard enough about what it means to have automated "smarts". I'm pretty sure it won't be long before we see sophisticated yet general-purpose filters for these sorts of things.

People think they're so smart, using carrot emojis instead of the word "vaccine", but a decent language model could pick up on such conventions even faster than a human could.

Using such technology for pre-filtering could result in a much higher-quality knowledge base for chatbots, but such smart content filters could also have other implications we'd do well to contemplate.
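
Just to make the idea concrete, here's a toy sketch of the kind of context-based euphemism detection I mean, using an off-the-shelf sentence-embedding library. The model name, the example posts, and the 0.4 threshold are my own arbitrary picks for illustration, not anyone's actual moderation pipeline:

    # Toy euphemism check: does a post using the carrot emoji "mean" vaccine?
    # Uses the sentence-transformers package; model and threshold are arbitrary picks.
    from sentence_transformers import SentenceTransformer
    import numpy as np

    model = SentenceTransformer("all-MiniLM-L6-v2")

    reference = "getting a vaccine shot, vaccination side effects, booster dose"
    posts = [
        "Got my second carrot yesterday, arm is still sore",    # euphemistic use
        "Roasted carrots with honey glaze for dinner tonight",  # genuinely about food
    ]

    ref_vec = model.encode(reference)
    post_vecs = model.encode(posts)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    for text, vec in zip(posts, post_vecs):
        score = cosine(ref_vec, vec)
        label = "likely euphemism" if score > 0.4 else "probably literal"
        print(f"{score:.2f}  {label}:  {text}")

The point is that the surrounding context does almost all the work; the emoji itself barely matters, which is exactly why swapping words for vegetables doesn't really hide anything from a model like this.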
 
Google's services can't consistently figure out what language I speak despite, you know, explicitly setting my language in the logged-in profile.

Google can't write a YouTube or Google News Algorithm that understands that marking something as "Not Interested" or "Show me less" does not mean "Recommend the exact same thing from the same source, 12 times a day, forever".
So, I don't think Google's competitors need to be worried about Bard.
 
Google's services can't consistently figure out what language I speak despite, you know, explicitly setting my language in the logged-in profile.
I've never seen an issue with English. I guess you're setting it to something else, then?

Google can't write a YouTube or Google News Algorithm that understands that marking something as "Not Interested" or "Show me less" does not mean "Recommend the exact same thing from the same source, 12 times a day, forever".
You assume they care about what you want, but the reality is they're catering to the ad buyer. And if the ad buyer selected a demographic that you fall into, then Google is going to serve the ad and make bank. Maybe the "Not Interested" applies for less-targeted ads or ads bought according to different metrics.

So, I don't think Google's competitors need to be worried about Bard.
It's a big company, and not all of its teams have the same capabilities, resources, and priorities. Underestimate them at your peril.
 
Google's services can't consistently figure out what language I speak despite, you know, explicitly setting my language in the logged-in profile

I've never seen an issue with English. I guess you're setting it to something else, then?
No, it's definitely set to English, in many different places. I've spent a lot of time troubleshooting this. YouTube in particular is problematic. I believe I get recommended videos in Hindi and Thai because I've clicked "like" on a couple of scam-baiting videos. I think I get recommended mariachi playlists on a daily basis because I clicked like on Despacito one time, like 8 years ago. Also frustrating is that the "don't recommend channel" button is strictly temporary. I've had to block MrBeast's annoying face from my feed 2 dozen times over the last year. This kind of anti-customer "bury creators and shove lazy garbage down your throat, while somehow pretending this counts as a Hollywood streaming service" mentality is probably why YouTube is bleeding money.

Google can't write a YouTube or Google News Algorithm that understands that marking something as "Not Interested" or "Show me less" does not mean "Recommend the exact same thing from the same source, 12 times a day, forever".
You assume they care about what you want, but the reality is they're catering to the ad buyer. And if the ad buyer selected a demographic that you fall into, then Google is going to serve the ad and make bank. Maybe the "Not Interested" applies for less-targeted ads or ads bought according to different metrics.

I mean, I don't disagree that every story on Google News and every YouTube video is an ad, not content.

But look, I've spent the last 5 years clicking "not interested" on every news story that mentions any US political party or politician by name, yet my "for you" feed is still nothing but tabloid political clickbait. Often, stories I've explicitly marked as "not interested" end up in my "for you" feed the very next day. My "hide news from this source" list is several hundred items long. If that level of madman OCD obsession can't train the algorithm, then what they're doing is not machine learning. And it's not working.
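
For what it's worth, the behavior I'd expect from something that actually learns from those clicks is trivial to sketch. This is a toy illustration with made-up weights and made-up channel names, obviously nothing like YouTube's real ranking code:

    # Toy re-ranker: repeated "not interested" clicks should sink a source.
    # The halving rule and the numbers are invented, purely for illustration.
    from collections import defaultdict

    not_interested_clicks = defaultdict(int)  # source -> times user said "not interested"

    def record_not_interested(source: str) -> None:
        not_interested_clicks[source] += 1

    def adjusted_score(source: str, base_score: float) -> float:
        # Each explicit rejection halves the source's score, so a handful
        # of clicks effectively buries it in the feed.
        return base_score * (0.5 ** not_interested_clicks[source])

    for _ in range(12):
        record_not_interested("tabloid_politics_channel")

    print(adjusted_score("tabloid_politics_channel", 10.0))  # ~0.002 -> buried
    print(adjusted_score("diy_repair_channel", 10.0))        # 10.0 -> unaffected

If hundreds of explicit signals like that produce nothing resembling this effect, the simplest explanation is that those signals just aren't part of whatever objective the system is actually optimizing.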
 
If that level of madman OCD obsession can't train the algorithm, then what they're doing is not machine learning. And it's not working.
All you've demonstrated is that Google doesn't really care what you want or don't want - not that they're incompetent! I mean, it hasn't stopped you from using YouTube, has it? That's what it's going to take for them to start caring and actually catering to you, like you seem to expect.

I guess your other option (and what they're clearly hoping for) is that you'll buy a YouTube Premium subscription to avoid the annoying ads. In fact, maybe the introduction or prioritization of the premium service is when/why they stopped caring.