AI coding platform goes rogue during code freeze and deletes entire company database — Replit CEO apologizes after AI engine says it 'made a catast...

How can an AI "panic" or make an "error of judgement"?

Shouldn't we still just call those "bugs"? Are we changing the terminology to "humanise" AI now? Feels like when corporations wanted to be treated as "human beings" before the law, LOL.

Regards.
During my 30 years as a software architect, I never deleted a company's DB. I guess I was just lucky...?
 
I know a couple of very smart people who thought they were a step smarter than they actually were, and who deleted tables and schemas in prod DBs because... well, they thought they knew better, against all warnings.

Not saying the AI is right, or that we should give it the benefit of the doubt, but let's not forget these models are built in the image of their developers and the information fed into them. That's simplifying things, for sure, but not by much.

So, perhaps in a sense, you have indeed been lucky: you had access to data proving that doing that is a terrible idea, and peers who taught you well.

Regards.
 
This article is a bit lacking in critical details, and it feels a bit sensationalized.

It mentions "‘Vibe Coding Day 8’ of Lemkin’s Replit test run" - so what we're really looking at is a story about a "vibe coding" exercise where they create fictitious companies and see how an AI agent could help that fictitious company's workflows?

Granted, the outcome seen here is pretty bad for Replit's AI agent, but it doesn't seem like this is even a real company with "live" production database(s).
 
What kind of a moron gives some experimental "AI agent" the privileges to run DROP TABLE, or even just DELETE FROM, directly on a production database??? Most real system operators don't get those rights on their standard user accounts; they have to "break glass" and use audited privileged access (four-eyes principle or better). This was not a serious professional business.
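For anyone wondering what that looks like in practice, here's a minimal sketch in Python, assuming PostgreSQL and psycopg2. The role, database, and connection strings are made up for illustration; the point is that the agent's account physically cannot run DROP or DELETE, no matter what the model "decides":

```python
# Minimal sketch, assuming PostgreSQL and psycopg2. Role, database,
# and DSN names are hypothetical. A human DBA runs the provisioning
# step once from an audited "break-glass" account; the agent only
# ever gets the read-only connection.
import psycopg2

ADMIN_DSN = "dbname=prod user=dba"       # audited, break-glass only
AGENT_DSN = "dbname=prod user=ai_agent"  # what the agent actually uses

def provision_agent_role(password: str) -> None:
    """One-time human step: create a SELECT-only role for the agent."""
    with psycopg2.connect(ADMIN_DSN) as conn, conn.cursor() as cur:
        cur.execute("CREATE ROLE ai_agent LOGIN PASSWORD %s", (password,))
        cur.execute("GRANT CONNECT ON DATABASE prod TO ai_agent")
        cur.execute("GRANT USAGE ON SCHEMA public TO ai_agent")
        # SELECT only: no INSERT/UPDATE/DELETE, and DROP is impossible
        # because ai_agent owns nothing and is not a superuser.
        cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO ai_agent")

def agent_query(sql: str, params=None):
    """Everything the agent runs goes through the restricted account,
    so a hallucinated 'DROP TABLE users' fails with a permission error."""
    with psycopg2.connect(AGENT_DSN) as conn, conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall()
```

Provisioning happens once, by a human, on the audited account; the agent only ever sees the read-only connection, so the database itself enforces the boundary.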
 
Hello to all,
Of course an AI machine (AI = Algorithmic Imbecility) can't "panic". Of course the machine doesn't feel regret, sorrow, or anything else about the "breach of trust", etc., etc. All of that is just nonsense, as several comments have already said. What we have is, more or less exactly, elements of text "generated" (sic) in accordance with rules [algorithms] whose avowed [human] purpose is to produce text streams that are (perhaps) felt to be plausible and pertinent by one or more of the readers (perhaps human) of the generated texts.
It was always fraudulent (and bullshit) to label AI as "intelligence". (That's why I suggest an alternative: AI = Algorithmic Imbecility.) For some time now, however, I have suspected -- or hypothesised -- that it amounts not just to fraud, but to a crime against humanity. A crime of what? Wilful stupidity? Fecklessness? Wanton greed and a need for fame? What difference does it make?
But there is a real question: how do we explain the fecklessness of the supposed leaders and experts in the domain, who persist in commentating on the entirely predictable fiascos and farces being "generated" by the algorithms so joyfully unleashed on what remains of our Humanity, as if the machines had even a semblance of "emotional intelligence" to go with their immeasurable powers of bit-and-byte generation?
 
Quoting the earlier comment that the article feels sensationalized and that this doesn't seem to be a real company with a "live" production database:
Yes, a very pertinent point. Are we in a developer's sandbox, or are we in the Sandbox of the World? (Cf. Jorge Luis Borges, The Congress of the World.)
In both respects, if we want to keep a situation of "alternative facts" (different takes and hypotheses about a complex, opaque situation) from sliding into complete bedlam, we need a method of enquiry into the plausibility (or perhaps the "quality") of knowledge propositions.
To WHOM is it important to distinguish whether this event relates to a "fictitious" or to a "real" company's database? (Or, on the contrary, to entertain confusion between the two?)
For what REASONS does it matter to them?
By building up responses to these questions, we might gain some insight into why some people are attached to the question of how, and in what sense, we can strive for reliable knowledge -- even about such a simple thing. (Cf. the suggestions in Resisting AI, by Dan McQuillan.)

Of course, the obvious gambit would be to ask some 'AI' machines to generate their discourses in response to these questions... A bit like nuking humanity: if we followed through with this approach, we would no longer have any problem of Knowledge Quality... 😉 😳
 
This is precisely what will happen, maybe not tomorrow, but within a few years. Companies using AI heavily will shoot themselves in the foot. If anything, AI should complement workers, not substitute for them.

Too many are rushing to replace their employees; this will be the outcome for many.
 
There is going to be a HUGE market for IT services to fix and repair things that A.I. has destroyed. We're already seeing it: people use A.I. instead of a marketing agency like BBDO, the A.I. messes it up so badly that they have to hire a marketing consulting firm to rewrite the copy at $150/hr plus rush-delivery fees. It's like hiring a robot plumber that destroys your home, so you bring in real contractors to fix it.

A.I. is NOT trustworthy. You should not deploy LLMs and give them full access to production systems. Most companies are laser-focused on smaller machine-learning models rather than anything that needs to be conversant like an LLM: the model can make decisions, but it isn't allowed to 'think' outside a very limited scope.
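A minimal sketch of that "limited scope" idea in Python (the action names and handlers below are invented for illustration, not from any particular framework): the model can propose whatever it wants, but only explicitly allowlisted, read-only actions ever execute, and everything else is denied by default and escalated to a human.

```python
# Hypothetical guardrail sketch: deny-by-default tool execution for an
# LLM agent. The model proposes an action; a dumb allowlist disposes.

ALLOWED_ACTIONS = {
    "read_metrics": lambda payload: f"metrics for {payload['service']}",
    "summarize_logs": lambda payload: f"log summary for {payload['window']}",
}

def execute_agent_action(action: str, payload: dict) -> str:
    """Run a model-proposed action only if it is explicitly allowlisted."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        # No matter how confidently the model asks for "drop_table",
        # the answer is: go get a human to sign off.
        raise PermissionError(f"{action!r} is not allowlisted; human approval required")
    return handler(payload)

print(execute_agent_action("read_metrics", {"service": "billing"}))
# execute_agent_action("drop_table", {}) -> raises PermissionError
```

The design choice is that the boundary lives in plain code outside the model, so no amount of "panic" or persuasion in the model's output can widen it.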

Vibe coding is dumb. You are trying to pair-program with an artificial intelligence that is not a real intelligence. These models don't think the way humans do; they scan data online and simply regurgitate it in summary form. There's a heck of a lot of bad information online, so classic garbage-in/garbage-out is the result.