On a serious note, if ChatGPT or comparable AI tools are weaponized to create polymorphic malware, future cyber threats could become increasingly intricate and difficult to mitigate. One implication that can't be overlooked is the ethical concern, which hardly anyone is talking about.
Speaking of ChatGPT's dangerous potential to create malware, researchers have already discovered multiple instances of hackers attempting to bypass its IP, payment card, and phone number safeguards.
Hackers are also exploiting ChatGPT's workflow capabilities to polish phishing emails and the fake websites that mimic legitimate ones, improving their chances of success. So much ado about an AI?