News ChatGPT Can Generate Malware Capable of Avoiding EDR Detection

LOL, I knew AI was gonna dominate humans one day! The prophecy is true after all.

As the article describes, the malware includes a Python interpreter that periodically queries ChatGPT for new modules that perform malicious actions. This allows the malware to receive incoming payloads in the form of text instead of binaries.

The result is polymorphic malware that frequently displays no suspicious logic while in memory and often does not even behave maliciously when sitting on disk.
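To see why that matters for detection, here is a minimal, defanged sketch (the hash database and payload string are invented for illustration): a traditional scanner fingerprints what sits on disk, but a module that only ever arrives as text and runs in memory never gives it anything to fingerprint.

# Minimal sketch (illustrative only): why on-disk signature scanning has
# nothing to match when the payload only ever exists as text in memory.
import hashlib

KNOWN_BAD_HASHES = {"5f4dcc3b5aa765d61d8327deb882cf99"}  # hypothetical signature DB

def scan_file(path: str) -> bool:
    # Classic signature check: hash the file's bytes and look them up.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() in KNOWN_BAD_HASHES

# The described malware never writes its module to disk; it receives plain
# text (here a harmless stand-in) and executes it in memory, so there is no
# file for scan_file() to hash in the first place.
payload_text = 'print("module delivered as text, executed in memory")'
exec(payload_text)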
 

Math Geek

Titan
Ambassador
nothing-to-see-here-move-along.jpg
 
On a SERIOUS note, if ChatGPT or other comparable AI tools are weaponized to create polymorphic malware, the future of cyber threats could become increasingly intricate and difficult to mitigate. One potential implication that can't be overlooked is the ethical concern, which nobody is talking about.

Also, speaking of ChatGPT's dangerous potential to create malware: in that context, some researchers have discovered multiple instances of hackers trying to bypass its IP, payment card, and phone number safeguards.

Hackers are also exploiting ChatGPT's workflow-tool capabilities to improve phishing emails and the associated fake websites that mimic legitimate sites, boosting their chances of success. So much ado over an AI?
 

Adzls

Commendable
Jun 11, 2020
IT experts have outlined the hazardous potential of ChatGPT to create polymorphic malware that’s nearly impossible to detect with modern standards.

ChatGPT Can Generate Malware Capable of Avoiding EDR Detection : Read more
ChatGPT can also be used to create code to detect and remove that polymorphic code and more, in ways we never thought about before.

Why isn't that discussed? Why is an alarmist "the sky is falling" narrative used with every AI topic these days? How about a good two-sided article on this next time, please. It just reads like gossip.

You know what else created by science and research can harm or benefit humans? The start of a long list could go here:
 

JamesJones44

Reputable
Jan 22, 2021
The issue with the creators putting safeguards in place is that most of the core LLM code is open source. It wouldn't take much for a person, company, or state to clone it, learn it, and then adapt it to do exactly what is described here, sell it, or get it deployed in an NSA-style way.

In this scenario there is no "middle man" putting protections in place to prevent it.
 
ChatGPT can also be used to create code to detect and remove that polymorphic code and more, in ways we never thought about before.
Yeah.

Also, the malware regenerates its own code on each deployment, or every time it's run, so that no two instances of the same malware are identical. This is very similar to how biological viruses are sometimes able to evade the immune system thanks to DNA mutations.

If there are effectively infinite variations of the malware, then antivirus companies would have to write an infinite number of detection rules.

For example, here is a simple illustration of how polymorphism works: say we want to put the number 2 into a register in assembly. The four methods below are just a handful of the nearly infinite ways of doing the same thing in programming, and polymorphic malware exploits exactly that.

; Method 1: load the constant directly
mov eax, 2

; Method 2: build it by addition
mov eax, 1
add eax, 1

; Method 3: zero the register, then increment twice
xor eax, eax
inc eax
inc eax

; Method 4: overshoot and decrement
mov eax, 3
dec eax
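Roughly the same idea in Python, as a quick sketch (nothing here comes from a real sample): every variant behaves identically, but their source bytes, and therefore any hash- or signature-style fingerprint, all differ.

# Python analogue of the assembly above (illustrative only): semantically
# identical variants whose fingerprints are all different.
import hashlib

variants = [
    "x = 2",                    # load the constant directly
    "x = 1 + 1",                # build it by addition
    "x = 0\nx += 1\nx += 1",    # zero, then increment twice
    "x = 3 - 1",                # overshoot and subtract
]

for src in variants:
    scope = {}
    exec(src, scope)            # every variant behaves the same...
    print(scope["x"], hashlib.sha256(src.encode()).hexdigest()[:12])  # ...different fingerprint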
 

bit_user

Polypheme
Ambassador
Right... So how about using ChatGPT to write a polymorphic-malware detector to detect the polymorphic malware written by ChatGPT?
Virus scanners have to be lightweight and fast, or people won't use them. A scanner that's good at detecting polymorphic malware is probably going to be rather slow and heavyweight. I'm reminded of just how much aggressive static analysis bogs down program compilation.
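To put rough shape on that trade-off, here is a back-of-the-envelope sketch (the patterns and structure are invented for illustration, not how any real engine works): a signature check is one hash and one lookup per file, while anything that tries to reason about what the code does has to dig through the content itself, and a real engine would go much further with disassembly or emulation.

# Sketch of the cost difference between a cheap signature check and a
# heavier content/heuristic check (all patterns here are invented).
import hashlib
import re

BAD_HASHES = set()  # known-bad fingerprints: one hash plus one set lookup per file

def signature_scan(data: bytes) -> bool:
    return hashlib.sha256(data).hexdigest() in BAD_HASHES

SUSPICIOUS_PATTERNS = [rb"exec\s*\(", rb"eval\s*\(", rb"VirtualAlloc", rb"WriteProcessMemory"]

def heuristic_scan(data: bytes) -> int:
    # Must inspect the content pattern by pattern; real engines add
    # disassembly and emulation on top, which costs far more per file.
    return sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, data))

sample = b'payload = "..."; exec(payload)'
print(signature_scan(sample))  # False: no known signature matches
print(heuristic_scan(sample))  # 1: the exec() heuristic fires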
 

DavidLejdar

Prominent
Sep 11, 2022
Quiet everyone, AI is listening, and this forum doesn't have an AI-jammer (yet)...

Other than that, I don't think it will look like Skynet. As far as bad scenarios go, I'd say it's more likely to be a battle or war of AIs. For example, in response to the mentioned malware, someone may use another AI to fight it off. And if the attacker uses an AI actively and wants to escalate, then it may end up as cyber-war, where the side with the most computing capacity may have quite an advantage.
 

Phaaze88

Titan
Ambassador
Yo dawg, I put AI in your AI...
So far, this crap has only managed to scare me.

The joys of AI becoming too smart. The people who manage AI can only manage it in the ways they know and intend it to be used, not in the myriad of alternative ways, beyond the obvious, of prompting an AI into practically the same answers.
Thus, Light AI will exist alongside Dark AI... /sigh
 

ottonis

Reputable
Jun 10, 2020
A mutating algorithm is nothing that a human with halfway decent programming skills couldn't code.
AI might do it faster, though.
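For what it's worth, a toy mutation pass really is only a few lines. Here is a benign sketch (the template snippet and renaming scheme are made up, and nothing malicious is involved): rename an identifier and sprinkle in no-op filler, and every output differs byte-for-byte while behaving the same.

# Toy, benign mutation pass: each run yields different source text with
# identical behaviour (it always prints 2).
import random

TEMPLATE = "counter = 0\ncounter = counter + 2\nprint(counter)"

def mutate(src: str) -> str:
    new_name = "v" + str(random.randrange(10_000))         # rename the identifier
    lines = src.replace("counter", new_name).splitlines()
    lines.insert(random.randrange(len(lines)), "pass  # no-op filler")
    return "\n".join(lines)

for _ in range(3):
    variant = mutate(TEMPLATE)
    print(repr(variant))   # different text every time
    exec(variant)          # same observable behaviour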
 

Deleted member 2947362

Guest
ChatGPT can also be used to create code to detect and remove that polymorphic code and more, in ways we never thought about before.

Why isn't that discussed? Why is an alarmist "the sky is falling" narrative used with every AI topic these days? How about a good two-sided article on this next time, please. It just reads like gossip.

You know what else created by science and research can harm or benefit humans? The start of a long list could go here:
This stuff is interesting, but I also agree with everything you say. As with all media these days, unfortunately, it has to be put across in a certain way just to keep people clicking and reading.

Because if your job is in some form of media, you need to keep people clicking and reading your work, or you could be out of a job and replaced by AI.

I'm not saying that's what's going on here or in this article, but that's how it is. You only have to look at any news media these days; they're all at it, even the YouTubers.

They all need to keep a certain amount of traffic flowing or they become irrelevant.

I guess in some ways the consumer is to blame for clickbait, because if that's all we will click on, you can't blame them for doing it.

In some ways it's like the brain's firewall (BS detector) can easily be bypassed to exploit the user, LOL.

It's most likely easier to hack the human (phishing) than to hack a PC these days, I would have thought.
 

BeedooX

Reputable
Apr 27, 2020
Interesting... it can create malware, it can tell offensive jokes about men, but when asked to tell a joke about women, it says it was written to be respectful to others...
 
