News Multiple ChatGPT instances work together to find and exploit security flaws — teams of LLMs tested by UIUC beat single bots and dedicated software

Status
Not open for further replies.
I don’t buy it. I tasked ChatGPT with analyzing a (very) simple spreadsheet and it couldn’t even count.
 
Open-source, open-weight models (I'm thinking llama3 specifically) can be very good and are more flexible for fine-tuning than closed-source models (like ChatGPT), which have guardrails on what can be done. I wonder if this system could be even better if the researchers took datasets of specific classes of exploits and used them to fine-tune an open model.

Take llama3 vs. llama3-lumi-tess, for example. The difference a little fine-tuning makes is pretty amazing.
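For what it's worth, the data-prep side of that idea is simple. Here's a minimal sketch of turning exploit write-ups into instruction-tuning records; the field names, prompt template, and example are my own invention, not anything from the UIUC work:

```python
import json

# Hypothetical raw examples for one exploit class (SQL injection here).
# The field names are invented for illustration.
raw_examples = [
    {"vuln": "SQL injection in login form",
     "payload": "' OR '1'='1' --",
     "explanation": "Comments out the password check."},
]

def to_instruction_record(ex):
    """Format one example as a prompt/completion record, the common
    JSONL shape most fine-tuning trainers accept."""
    return {
        "prompt": f"Describe how to test for: {ex['vuln']}",
        "completion": (f"Payload: {ex['payload']}\n"
                       f"Why it works: {ex['explanation']}"),
    }

records = [to_instruction_record(ex) for ex in raw_examples]

# One JSON object per line, ready to feed to a fine-tuning run.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

From there it's the usual open-model fine-tune recipe (LoRA or full fine-tune) on whatever trainer you prefer.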
 
UIUC computer scientists have developed a system in which teams of LLMs work together to exploit zero-day security vulnerabilities without being given a description of the flaws, a major step forward in the school's ongoing research.

This will be the new norm: model A performs the attack, model B responds to it. Machines will get smarter than humans.
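As I understand the team-of-LLMs design, a planning agent dispatches task-specific sub-agents and collects their results. A toy sketch of that control flow, with the model calls stubbed out as canned strings (the agent names and results are placeholders, not the actual UIUC system):

```python
# Toy planner/sub-agent loop. A real system would replace the stubs
# with model API calls and tool execution (browser, HTTP client, etc.).

def planner(target: str) -> list:
    """Stub planner: decide which specialist agents to dispatch.
    A real planner would ask an LLM to triage the target first."""
    return ["sqli_agent", "xss_agent"]  # placeholder task list

def run_agent(name: str, target: str) -> dict:
    """Stub specialist agent: probe the target for one vuln class.
    A real agent would loop: prompt model -> run tool -> feed output back."""
    found = (name == "sqli_agent")  # canned result for the sketch
    return {"agent": name, "target": target, "exploited": found}

def orchestrate(target: str) -> list:
    """Dispatch every specialist the planner chose, collect reports."""
    return [run_agent(name, target) for name in planner(target)]

reports = orchestrate("http://example.test/login")
print(reports)
```

The point of the structure is that each sub-agent only has to be good at one exploit class, while the planner handles breadth.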
 