ChatGPT has generated a lot of excitement in the tech industry over the last few months, and not all of it has been positive. Now, someone claims to have created powerful data-mining malware in just a few hours using ChatGPT prompts. Here’s what we know so far.

Who is behind this malware?

Aaron Mulgrew, a security researcher at Forcepoint, has described how he used OpenAI’s generative chatbot to create malware. Despite ChatGPT having multiple safeguards in place to prevent the creation of malicious code, Mulgrew was able to work around them and develop the malware anyway.

Mulgrew created a sophisticated data-stealing program by instructing ChatGPT to generate the code function by function and line by line. Once he compiled all the component functions, he found that he had developed malware as advanced as any nation-state malware.

The fact that Mulgrew was able to develop such dangerous malware without the help of a hacking team, and without having to write the code himself, is extremely concerning. It highlights the ease with which malicious actors could potentially create and distribute malware using automated tools like generative chatbots.

ChatGPT as a cyber weapon

What exactly does the malware do?

The malware disguises itself as a screensaver app that launches automatically on Windows devices. Once on a device, it searches through all types of files, including Word documents, pictures, and PDFs, seeking any data it can steal.

After gaining access to the data, the malware fragments it into smaller components and hides them inside images stored on the device, a technique known as steganography. These images are then uploaded to a Google Drive folder, where the traffic looks legitimate enough to evade detection. With straightforward prompts to ChatGPT, Mulgrew was able to make the code robust and resistant to detection, leaving the malware highly potent.
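To see why hiding data inside images is hard to detect, here is a minimal, benign sketch of the general steganography idea: payload bits are written into the least-significant bits of a carrier buffer, so the carrier looks essentially unchanged to casual inspection. This is an illustration of the technique in general, not Mulgrew’s code; the function names and the use of a plain byte buffer (rather than real image pixels) are assumptions for the demo.

```python
def embed_lsb(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of each carrier byte."""
    out = bytearray(carrier)
    # Expand the payload into individual bits, most-significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("carrier too small for payload")
    for idx, bit in enumerate(bits):
        # Clear the lowest bit of the carrier byte, then set it to the payload bit.
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract_lsb(carrier: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden payload from the carrier's least-significant bits."""
    result = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (carrier[i * 8 + j] & 1)
        result.append(byte)
    return bytes(result)

# A 64-byte carrier of zeros can hide an 8-byte secret (8 bits per hidden byte).
stego = embed_lsb(bytearray(64), b"hi")
print(extract_lsb(stego, 2))  # → b'hi'
```

Because each carrier byte changes by at most one in value, an image modified this way is visually indistinguishable from the original, which is what makes this class of exfiltration hard for simple scanners to flag.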

How does this affect ChatGPT?

Although this was all done in a private test by Mulgrew, and the malware has not been released against anyone in public, it is frightening to learn what can be accomplished with ChatGPT. Mulgrew says he has no advanced coding skills, yet ChatGPT’s safeguards were still insufficient to prevent his test. Given the potential severity of the threat, it is essential that OpenAI take swift and decisive action to bolster its security measures so that no malicious actor can replicate Mulgrew’s work.


Rhyno delivers a range of activities that combine to fully protect your infrastructure and data from cybercriminals, anywhere and everywhere, 24/7/365.


About Rhyno Cybersecurity Services

Rhyno Cybersecurity is a Canadian-based company focusing on 24/7 Managed Detection and Response, Penetration Testing, Enterprise Cloud, and Cybersecurity Solutions for small and midsize businesses.

Our products and services are robust, innovative, and cost-effective. Underpinned by our 24x7x365 Security Operations Centre (SOC), our experts ensure you have access to cybersecurity expertise when you need it the most.
