News ChatGPT's New Code Interpreter Has Giant Security Hole, Allows Hackers to Steal Your Data

Status
Not open for further replies.
The main problem with debating whether something is really "AI" is the continual goalpost movement by non-experts in the field. All of the long-time AI academics and practitioners seem to agree that generative AI fits the definition - it's only the general public that seems to object.
The other half of the problem is that FAR too many people have glommed on to the advertising hype and think that the current user-facing AI bots are pretty much infallible.
I know these people. I work with these people.

And if it DOES give you a mediocre answer...well, you're just not using it right.
There are indeed techniques for getting better results, however you correctly point out that one should be aware of the model's limitations and not try to use it for things it can't do well.
And even for the things it maybe does well.

I know a guy who is taking a biology class for college.
He is using ChatGPT extensively to write his term papers.

"So...you're learning absolutely nothing about biology, but rather just how to manipulate the AI bot into spitting out a sort of reasonable sounding paper. Got it."
 
>The reason I feel this way is the pace of improvement over the 8 years I've been aware and worked with ML has actually been way slower than I thought it would be.

My thoughts on this: Yes, societal change is never instant, but neither is it linear. Past is not prologue. As with many shifts, there is a critical threshold after which change cascades. From my view, AI is on that cusp.

>For example, I thought self driving cars would have been well out of fixed areas and "beta" by now, yet we are still looking like we are 5 or more years away

Self-driving car development is not a good analogy, as it has to contend with existing infrastructure and ingrained practices. Imagine instead that we were at the very beginning, with no vehicles or highways, building both from scratch with self-driving as the guide. That is more reflective of AI as it stands today, where there are no guideposts and no need to conform to an existing infrastructure.

Second, self-driving has to deal with safety: the tech has to be bulletproof, or it will cause deaths. Contrast this with current LLMs, where hallucinations are bad but are an acceptable trade-off against the benefits.

Third, self-driving has to deal with the physical world and, more importantly, with human quirks and unpredictability (to wit: the San Francisco accident that effectively shut down Cruise's robotaxis). AI development doesn't have to deal with this.
 