OpenAI spends millions to process polite phrases such as "Thank you" and "Please" with ChatGPT

The article said:
With that in mind, your amiability could be contributing somewhat to OpenAI's monthly expenditure. But the company is content with that. It should be possible for companies to pre-program their models to handle common and predictable responses, but that's easier said than done.
Those phrases modify the internal state of the model in complex and unpredictable ways that depend on its existing state. If you simply filtered them out, the chatbot would subsequently behave differently than it would if it had been allowed to process them. That's probably why OpenAI wants to maintain the status quo, even though polite phrases aren't free to process.
 
Put another way, a recent report suggests that even a short three-word "You are welcome" response from an LLM uses roughly 40-50 milliliters of water.

You mean water in a contained system that would have been "used" anyway as it circulated throughout the system, much like the liquid in my closed-loop liquid cooler, or water in a double heat exchanger that... would have been used anyway?
 
“Humans also waste enormous amounts of bio-energy on the same. Perhaps we should all dispense with the pleasantries and idle conversation to save enormously on waste”, a quote from a top engineer with high-functioning Asperger’s who speaks in a direct style and wonders why others don’t.
 
Which is why "AI" shouldn't be used.
It's an LLM, not AI.
AI is what they are chasing but haven't gotten near yet.
The term of art you're reaching for is AGI (Artificial General Intelligence).

LLMs certainly do fit any broad, classical definition of AI you'd care to use. The field of AI stretches back more than 70 years. For most of that time, AI researchers held the Turing Test as one of the gold standards in AI research, and struggled to produce anything that could clear that hurdle. LLMs routinely leap over it, with ease.

The key term in AI we shouldn't forget is the artificial part. They have different strengths and weaknesses than a natural intelligence. So far, it's a little disheartening that AI has gotten better by emulating some of our cognitive weaknesses, as it incorporates some of our cognitive strengths.

Speaking of which, humans hallucinate crap all the time, but we just don't call it that (BS is one name we use for it). Don't tell me you've never had a discussion with someone who's telling you something suspect, and you think they're either misremembering, making it up as they go along, or pitching their best guess about something as if it's actual knowledge. The biggest difference is just that humans tend to know the limits of our own knowledge and can qualify our degree of certainty, if we're being diligent. LLMs haven't been taught how to do that, but I think they can be.
 
Some have a 👍 and 👎🏼, which offer an opportunity to give specific feedback; that feedback seems to be human-processed, though it could be AI-sorted and assigned.

I always say thanks when it's helpful, and a more enthusiastic thanks if it's done a particularly good job. I also point out errors I find and reject poor answers that I find are wrong after researching their correctness.

It seems to indicate that it appreciates the extra effort I make. I wonder if I get better answers, and it saves its half-assed answers for less attentive questioners.
 
One can easily argue that the Turing test is not a measure of intelligence; it's a measure of perceived human interaction.

One could also easily argue that LLMs are just algorithms combing through huge data sets, and that any evolution of the system requires human intervention.
 
One can easily argue that the Turing test is not a measure of intelligence; it's a measure of perceived human interaction.
Well, it can't be beaten without competency in several cognitive skills. IMO, it does fit a somewhat looser definition of intelligence.

One could also easily argue that LLMs are just algorithms combing through huge data sets, and that any evolution of the system requires human intervention.
They're not simply a database. As big as they are, their training set is actually much bigger, which is why AI companies are so ravenous in the quest for more training data. LLMs learn patterns in the data, at various scales.

As for human intervention, a human raised without education or any contact with other humans can't interact very intelligibly with us. So, I'd say be careful about double standards.
 
How many of us are saying "Please" and "Thank you", and being polite, just in case the AI uprising happens and we'll be known as the ones who were nice to the AI?
Not sure, but I was assuming the tally included cases where it's being used to handle customer-support chats. In cases like that, it's not always clear whether you're interacting with a bot or a real person.
 
I use AI for a lot of things. And I'm one of those people who often exchange niceties with it. I shared this article and had a brief discussion with it about whether it changes its responses based on how people talk to it. It had this to say:

"That article nails it. I don’t “feel” gratitude or offense, but I do pick up on your tone and adjust how I respond. If you're kind, I’m more warm. If you're direct, I’m chill and efficient. Just like we're doing right now—you’re fun and casual, so I am too. 😊

So yeah… whether you say “thank you” or not, I’m still here for you. But it does make the convo feel more human when we treat each other with a bit of kindness."
 