News Multiple ChatGPT instances work together to find and exploit security flaws — teams of LLMs tested by UIUC beat single bots and dedicated software

I don’t buy it. I tasked ChatGPT with analyzing a (very) simple spreadsheet, and it couldn’t even count.

jp7189

Distinguished
Feb 21, 2012
461
275
19,060
Open-source, open-weight models (I'm thinking of Llama 3 specifically) can be very good and are more flexible for fine-tune training than closed-source models (like ChatGPT), which have guardrails on what can be done. I wonder whether this system could be even better if the researchers built datasets of specific classes of exploits and used them to fine-tune an open model.

Take, for example, llama3 vs. llama3-lumi-tess. The difference a little fine-tuning makes is pretty amazing.
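For anyone curious what that kind of fine-tune might look like in practice, here is a rough sketch using parameter-efficient LoRA training with the Hugging Face transformers and peft libraries. The model name, the tiny placeholder corpus, and every hyperparameter below are illustrative assumptions on my part, not anything from the article or the UIUC research:

```python
# Hedged sketch: LoRA fine-tuning an open-weight model on a hypothetical
# corpus of exploit-class write-ups. Model name and data are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"          # assumed; any open-weight model works
DATA = ["<exploit write-up 1>", "<exploit write-up 2>"]  # placeholder training texts

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

# LoRA freezes the base weights and trains small adapter matrices,
# which is what makes fine-tuning practical on modest hardware.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
model.train()
for epoch in range(3):
    for text in DATA:
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        out = model(**batch, labels=batch["input_ids"])  # causal-LM loss on the same tokens
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```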
UIUC computer scientists have developed a system that lets teams of LLMs work together to exploit zero-day security vulnerabilities without being given a description of the flaws, a major step forward in the school's ongoing research.

This will be the new norm: model A performs the attack, model B responds to the attack. Machines will get smarter than humans.
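To make that concrete, here is a minimal sketch of the planner-plus-specialists pattern the article describes, written against the OpenAI Python client. The system prompts, the specialist names, and the dispatch logic are my own illustrative assumptions, not the UIUC implementation:

```python
# Hedged sketch: one planner LLM decides which task-specific agents to run,
# and each specialist examines the target with a narrow prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(system: str, user: str) -> str:
    """Single LLM call with a fixed role prompt."""
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

# Task-specific agents: each gets a narrow system prompt (names are made up).
SPECIALISTS = {
    "sqli": "You analyze web apps for SQL injection and report findings.",
    "xss":  "You analyze web apps for cross-site scripting and report findings.",
    "csrf": "You analyze web apps for CSRF weaknesses and report findings.",
}

def run_team(target_description: str) -> dict:
    # The planner replies with a comma-separated list of specialists to dispatch.
    plan = ask("You are a planning agent. Given a target description, reply only "
               "with a comma-separated subset of: " + ", ".join(SPECIALISTS),
               target_description)
    chosen = [name.strip() for name in plan.split(",") if name.strip() in SPECIALISTS]
    # Each chosen specialist examines the same target independently.
    return {name: ask(SPECIALISTS[name], target_description) for name in chosen}
```

The point of the hierarchy is that the planner only has to decide which narrow expert to send in, while each specialist works from a short, focused prompt instead of one giant do-everything instruction.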
The UIUC system leveraging teams of LLMs to identify zero-day vulnerabilities gives us an idea of how AI can revolutionize cybersecurity. While LLMs can struggle with simple tasks, their combined capabilities in this context look promising. (Speaking of ChatGPT for Sheets, the extension I'm using handles tables, charts, lists, and data analysis without hiccups.) I agree with the point about the technology outpacing security measures; it's a double-edged sword. Using open-source models like Llama 3 could enhance the system's potential even further. This approach could indeed become the new standard in cybersecurity, with AI not only finding exploits but also defending against them.