News Researchers train AI chatbots to 'jailbreak' rival chatbots - and automate the process

Status
Not open for further replies.
Thats been a known flaw since day 1. (just matter of time until it happened)

A program is only as strong as its weakest link....& as its made & trained off humans it is functionally flawed from the start.

A machine (even LLM) is going to follow its programming rules. That is a critical flaw in they can be abused by those who are skilled/knowledgeable about em.



I am curious if they would let law enforcement use it to see how much illegal stuff has been trained. (as im curious as to how deep that rabbithole is on scrapping web)
 
As far as public chatbots which are trained on public data are concerned it's no big deal. The problem though is if this method can be used to attack private chatbots which are trained on sensitive data, such as chatbots used for customer service and can access people's personal information.
 
Status
Not open for further replies.