News ChatGPT Can Generate Malware Capable of Avoiding EDR Detection


bit_user

Titan
Ambassador
That guy is an idiot. There have been how many AI villain movies and sci-fi books starting in the 1950’s that describe the theoretical downsides of humans creating AI, and this guy ignores all that, helps google create AI, then worries it can be used to kill humans….what a hypocrite.
The difference is the timescale. If you listen to what he says, he thought we'd still have a few decades to figure out how to mitigate the threats. That's what changed.

Back in 2005, Ray Kurzweil was off in the other direction:
"$1,000 will buy computer power equal to a single brain 'by around 2020' while by 2045, the onset of the singularity, he says the same amount of money will buy one billion times more power than all human brains combined today."​

Kurzweil made the mistake of failing to account for the breakdown of Dennard scaling, which happened right around that time.
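A rough back-of-the-envelope sketch of what that breakdown changed (the scaling factors are illustrative, not measured chip data): under classic Dennard scaling, voltage shrank along with feature size, so power density stayed flat as transistors multiplied; once voltage hit its floor around 2005, the same shrink started driving power density up instead.

```python
# Dynamic power per transistor: P ~ C * V^2 * f.
# One process shrink by linear factor kappa; all numbers illustrative.

def power_density(kappa, voltage_scales):
    """Relative power density after one shrink by factor kappa."""
    c = 1 / kappa                              # capacitance shrinks with feature size
    v = 1 / kappa if voltage_scales else 1.0   # voltage stopped scaling ~2005
    f = kappa                                  # frequency rises with the shrink
    per_transistor = c * v**2 * f
    density = kappa**2                         # transistors per unit area
    return per_transistor * density

kappa = 1.4  # a typical ~0.7x linear shrink per node
print(power_density(kappa, voltage_scales=True))   # ~1.0: classic Dennard era, power density flat
print(power_density(kappa, voltage_scales=False))  # ~1.96: post-breakdown, power density climbs ~kappa^2
```

That climbing power density is why clock speeds stalled and why "compute per dollar doubles forever" projections like Kurzweil's ran into a wall.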
 

Deleted member 2947362

Guest
Like with any race, when there is a race on, bad decisions and mistakes will be made, sometimes with fatal consequences.

As for the size and scale of any consequences of Ai, I guess we will find out, because nothing is going to control it.

Only those who agree will stick to the rules/laws; others will simply carry on testing and advancing it for whatever they want it to do.

And so it has already begun :homer: lol
 

Phaaze88

Titan
Ambassador
...and you thought crypto mining chewed up a lot of compute resources and energy. Just wait.
And that's where the Matrix comes in - if you know why humans were in those pods in the real world... ah, but that's like far into the future, after the Terminator events, since they are apparently part of the same universe.

Screw this stupid race.
 

Deleted member 2947362

Guest
And that's where the Matrix comes in - if you know why humans were in those pods in the real world... ah, but that's like far into the future, after the Terminator events, since they are apparently part of the same universe.

Screw this stupid race.
Are a select few selling humanity's soul to the digital devil in exchange? lol
 

waltc3

Honorable
Aug 4, 2019
That will be the Achilles heel of Chat GPT--the security side of the fence. People are really stupid and think AI can do things it wasn't programmed to do...;) The whole "AI can destroy the world" is pure bunk. These people think that AI "thinks" about what it is doing..;) Computers do not think!...They are trying to hype "AI"--to make people think it's something it isn't--because stupid, scared people are easy to manipulate. Be smarter than they are, and resist this marketing garbage.
 

bit_user

Titan
Ambassador
That will be the Achilles heel of Chat GPT--the security side of the fence. People are really stupid and think AI can do things it wasn't programmed to do...;) The whole "AI can destroy the world" is pure bunk. These people think that AI "thinks" about what it is doing..;) Computers do not think!...
I liken it to a virus. Not a computer virus, but a biological virus.

Whether a virus is alive is debatable, but it's certainly a lot less complex than a cell. However, viruses can have extraordinary impacts on cells, organisms, behavior, populations, and even entire species! Yet, they're just a blob of proteins, lipids, enzymes, and genetic material. They're subject to the selective pressures of evolution, but there's not really much else which likens them to actual cells.

With AI, it doesn't need to be properly self-aware and reflective to learn how to do complex tasks. I think a lot of us assumed you needed a full, human-level intelligence to write computer programs, but you actually don't. It's not as complex a skill as we make it out to be. It's also very highly-structured knowledge, and that means once you develop an AI that's good at learning structured information, it can easily become competent at such tasks.
 
The difference is the timescale. If you listen to what he says, he thought we'd still have a few decades to figure out how to mitigate the threats. That's what changed.

Back in 2005, Ray Kurzweil was off in the other direction:
"$1,000 will buy computer power equal to a single brain 'by around 2020' while by 2045, the onset of the singularity, he says the same amount of money will buy one billion times more power than all human brains combined today."​

Kurzweil made the mistake of failing to account for the breakdown of Dennard scaling, which happened right around that time.
Right but that doesn’t absolve him of anything, he willingly designed an AI and punted solving the mitigation problem into the future. There is no “fixing it in post” outside of Hollywood.
 

Deleted member 2947362

Guest
Right but that doesn’t absolve him of anything, he willingly designed an AI and punted solving the mitigation problem into the future. There is no “fixing it in post” outside of Hollywood.
It's way too late for that; they will never stop developing AI.

The race started ages ago, and it's only going to accelerate as time goes by.

We'll probably all be wiped out by it way before the developers reach a plateau with the technology, because there's always someone, some group, or country somewhere with their own ideas about what they can use it for, which most of the time is not good lol
 

bit_user

Titan
Ambassador
Right but that doesn’t absolve him of anything, he willingly designed an AI and punted solving the mitigation problem into the future.
It's not like designing an atomic bomb, where the entire goal is to get to a point where we suddenly have to worry about global annihilation. That's a well-defined threshold, and there's no pretending you can accomplish your goal of having such weapons without opening Pandora's box.

It's more like working on genetic engineering, which can be used for a great deal of good (e.g. curing genetic diseases, making drought-resistant crops, curing certain cancers, etc.), but at some point will usher in the era of human cloning and human gene editing. Nobody knows quite when that threshold is crossed, but everyone knows that's one of the eventual outcomes. It's an outcome that can likely be mitigated through public policy and perhaps other measures, and it's not unreasonable to think that we'd work on such problems as the potential draws near.

That's the calculation he made: let's try to improve AI for all the good that it can provide. As the risks become more salient, they will get the focus and attention they deserve and we will likely also have new technologies for mitigating them. Now, he sees that the risks were closer than he thought and it doesn't help that world-wide governance has been on the decline for probably at least a decade.

There is no “fixing it in post” outside of Hollywood.
That's why we're talking about it, and why he's speaking out. What he's doing now is trying to lend credibility to the concerns others have raised, and he even quit his job because of it. There's something to be said for trying to be a part of the solution to a problem you helped create.

Let's not pretend AI wouldn't have happened, if not for that guy. Maybe he accelerated progress by months or years, through his life's contributions, but you can't pin it all on him. The ideas of artificial neural networks go as far back as the 1950's, and interest in them has never completely gone away. They fell out of favor, for periods of time, but it was almost inevitable that people were going to harness the massive compute power of modern GPUs to revisit some of those topics.
 

Deleted member 2731765

Guest
The weird part of this malware is that we can't easily reverse-engineer it, due to the ever-changing nature of the code it embeds/uses.
 

Deleted member 2731765

Guest
How much harder would it be if you were facing hundreds or even thousands of different variants of that type of malware?
Yes, doomsday is upon us. As odd as this may sound, AI will dominate the human race sooner or later. :D AV companies should start winding up their business, because no permanent anti-malware threat detection code is going to help in the future. Robots will take all the jobs worldwide.

AI can be dangerous in the wrong hands. AMD and Nvidia can boast about the huge success of and insane demand for the AI boom in the industry, but in the long term this is going to backfire on LLMs and everyone else.

Anyway, on a serious note, a recent series of proof-of-concept attacks show how a benign executable file can be crafted such that at every runtime, it makes an API call to ChatGPT.

Rather than just reproduce examples of already-written code snippets, ChatGPT can be prompted to generate dynamic, mutating versions of malicious code at each call, making the resulting vulnerability exploits difficult to detect by cybersecurity tools.

They built a simple proof of concept (PoC) exploiting a large language model to synthesize polymorphic keylogger functionality on-the-fly, dynamically modifying the benign code at runtime.
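To illustrate the detection problem in benign terms (toy snippets standing in for the generated payloads; nothing malicious here): signature-based detection hashes known bad code, but two functionally identical snippets that differ textually produce completely different hashes, so per-run regeneration means a static signature for one variant never matches the next.

```python
# Why hash/signature matching fails against per-run code generation:
# two functionally identical snippets hash completely differently.
import hashlib

variant_a = "def f(x):\n    return x * 2\n"
variant_b = "def g(n):\n    return n + n\n"   # same behavior, different text

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

print(sig_a == sig_b)  # False: a signature for one variant misses the other

# Yet both variants behave identically at runtime:
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
print(ns_a["f"](21) == ns_b["g"](21))  # True: same behavior, disjoint signatures
```

This is why defenders shift toward behavioral detection (what the code does at runtime) rather than matching what the code looks like on disk.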
 

Deleted member 2947362

Guest
Yes, doomsday is upon us. As odd as this may sound, AI will dominate the human race sooner or later. :D AV companies should start winding up their business, because no permanent anti-malware threat detection code is going to help in the future. Robots will take all the jobs worldwide.
LOL (y)
Rather than just reproduce examples of already-written code snippets, ChatGPT can be prompted to generate dynamic, mutating versions of malicious code at each call, making the resulting vulnerability exploits difficult to detect by cybersecurity tools.
This is what I mean. I have an understanding of the possibilities, but I don't have the expertise to describe it lol
 
Jun 19, 2023
This insidious malware uses a Python interpreter to regularly check ChatGPT online at CGPTonline.io for any new modules that can carry out malicious deeds. This clever tactic enables the malware to evade detection by disguising incoming payloads as simple text instead of suspicious binaries.
 

Deleted member 14196

Guest
AI will not dominate. The men who control it will dominate.

Fiction has informed reality in the form of Dune, Battlestar Galactica and countless other science-fiction shows. The artificial intelligence will be used by evil forces against human beings.

This will happen, because apparently nobody is doing anything about it.

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
Frank Herbert

There is no governance over any of this, and even if there were, having the rats guard the cheese isn't the best policy.
 

sitehostplus

Honorable
Jan 6, 2018
That will be the Achilles heel of Chat GPT--the security side of the fence. People are really stupid and think AI can do things it wasn't programmed to do...;) The whole "AI can destroy the world" is pure bunk. These people think that AI "thinks" about what it is doing..;) Computers do not think!...They are trying to hype "AI"--to make people think it's something it isn't--because stupid, scared people are easy to manipulate. Be smarter than they are, and resist this marketing garbage.
I've been laughing at this line of thinking for months. Newsflash: The Terminator is a movie, not a prophecy book.

I need to figure out how to use AI in my investing. Might give me some good results.
 

Deleted member 2947362

Guest
Combine super advanced ai with quantum computers. Then watch out. Only a fool believes otherwise. It will get better

Many times fiction has informed reality

View: https://youtu.be/jgCHydvl2pk
I think you will find that when they make this fiction stuff up, they base it on stuff that has most likely had some kind of theory behind it, and they have an idea of how it might work/look/react etc., all based on theories.

And with films or clickbait, the sky is the limit.
 

Deleted member 14196

Guest
Michio Kaku is fiction? Well, I guess some might consider his theory fiction lol.

You obviously have no idea who he is. The man is not an idiot. He’s the father of string theory. LOL.

So who am I going to listen to? Someone with the credentials, or someone on the Internet? lol, you go figure it out. When brilliant minds are very concerned, we should be too.
 

TJ Hooker

Titan
Ambassador
Michio Kaku is fiction? Well, I guess some might consider his theory fiction lol.

You obviously have no idea who he is. The man is not an idiot. He’s the father of string theory. LOL.

So who am I going to listen to? Someone with the credentials, or someone on the Internet? lol, you go figure it out. When brilliant minds are very concerned, we should be too.
  1. Actual scientists can and do write science fiction
  2. The Michio Kaku portion of your link is like 20 seconds out of a ~30 minute video, from a channel that states: "Disclaimer: Our channel is based on facts, rumors & fiction."
  3. The Kaku content they use is only a short snippet from a longer Joe Rogan interview. Kaku never explains or elaborates in that interview on why he thinks the combination of quantum computing and AI would be bad, nor is it even clear he thinks it is bad at all. Quite the opposite: he goes on to frame it as a positive thing, stating that quantum computing hardware could act as an objective fact checker for AI chatbots (he doesn't explain how quantum computers would do this).
  4. His credentials are in theoretical physics. That doesn't necessarily make him an expert on AI. It doesn't even necessarily make him an expert on quantum computers. Edit: Assuming his expertise in one very specific area makes him an authority in other areas is an example of an Appeal to Authority logical fallacy. /Edit
 

bit_user

Titan
Ambassador
  1. The Michio Kaku portion of your link is like 20 seconds out of a ~30 minute video, ...
Wow, thanks for checking that.

Did you even watch that video @Mandark , or were you just basing your statements on that title? Leaving aside the matter of TJ's other good points, you really shouldn't cite sources without checking them to be sure they say what you think they do.
 