
Cybercrime Meets ChatGPT: Look Out, World

Misused chatbot could create customized malware and whole new cybersecurity threats


The world is abuzz with what ChatGPT is capable of. Sure, it answers both mundane and philosophical questions, writes and debugs code, and could even help screen for Alzheimer's disease. But as with every new technology, OpenAI's AI-powered chatbot is at risk of being misused.

Researchers from Check Point Software found that ChatGPT could be used to create phishing emails. Combined with Codex, a natural-language-to-code system also from OpenAI, ChatGPT could then be used to develop and inject malicious code. “Our researchers built a full malware infection chain starting from a phishing email to an Excel document that has malicious VBA [Visual Basic for Applications] code. We can compile the whole malware to an executable file and run it in a machine,” says Sergey Shykevich, threat intelligence group manager at Check Point Software. He adds that ChatGPT mostly produces “much better and more convincing phishing and impersonation emails than real phishing emails we see in the wild now.”

ChatGPT “will allow more people to be coders, but the biggest risk is that more people could become malware developers.”
—Sergey Shykevich, Check Point Software

Yet iteration is key when it comes to ChatGPT. “On the code side, the first output wasn’t perfect,” Shykevich says. “I would compare how I use it to Google Translate, where the output will mostly be good. But I will review that and make some corrections or adjustments. The same happens with ChatGPT where you can’t use the code exactly as is and small adjustments need to be made.”

Lorrie Faith Cranor, director and Bosch Distinguished Professor of the CyLab Security and Privacy Institute and FORE Systems Professor of computer science and of engineering and public policy at Carnegie Mellon University, echoes this sentiment. “I haven’t tried using ChatGPT to generate code, but I’ve seen some examples from others who have. It generates code that is not all that sophisticated, but some of it is actually runnable code,” she says. “There are other AI tools out there for generating code, and they are all getting better every day. ChatGPT is probably better right now at generating text for humans, and may be particularly well suited for generating things like realistic spoofed emails.”

The researchers have also identified hackers using ChatGPT to develop malicious tools, such as an information stealer and a dark web marketplace. “[ChatGPT] will allow more people to be coders, but the biggest risk is that more people could become malware developers,” says Shykevich.

“I think to use these [AI] tools successfully today requires some technical knowledge, but I expect over time it will become easier to take the output from these tools and launch an attack,” Cranor says. “So while it is not clear that what the tools can do today is much more worrisome than human-developed tools that are widely distributed online, it won’t be long before these tools are developing more sophisticated attacks, with the ability to quickly generate large numbers of variants.”

Further complications could arise from the lack of ways to detect if malicious code was created with the help of ChatGPT. “There is no good way to pinpoint that a specific software, malware, or even phishing email was written by ChatGPT because there is no signature,” Shykevich says.

For its part, OpenAI is working on a method to “watermark” the outputs of GPT models, which can later be used to prove that they were produced by AI instead of humans. Shykevich also notes that after Check Point Software published its findings, researchers found it was no longer possible to generate phishing emails using ChatGPT.
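OpenAI has not published the details of its watermarking method. As a rough illustration only, published statistical-watermarking proposals work roughly like this: the generator biases each output token toward a pseudorandom "green list" derived from the preceding token, and a detector re-derives those lists and counts how often the text lands on them. The sketch below (hypothetical function names and parameters, not OpenAI's scheme) shows the detection side:

```python
import hashlib
import random

def green_fraction(tokens, vocab_size=1000, green_ratio=0.5):
    """Estimate what share of tokens fall on the watermark's 'green list'.

    Illustrative sketch of a generic statistical-watermark detector;
    token IDs, vocab size, and the seeding rule are all assumptions.
    """
    if len(tokens) < 2:
        return 0.0
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        # Re-derive the same pseudorandom vocabulary split the generator
        # would have used, seeded only by the previous token.
        seed = int(hashlib.sha256(str(prev).encode()).hexdigest(), 16)
        rng = random.Random(seed)
        green = set(rng.sample(range(vocab_size), int(vocab_size * green_ratio)))
        hits += tok in green
    return hits / (len(tokens) - 1)
```

Ordinary human text should score near the baseline `green_ratio` (here 0.5), while text from a watermarked generator scores well above it; over enough tokens, a simple statistical test on that gap flags the text as machine-generated without needing any "signature" in the content itself.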

To protect against these AI-generated threats, Shykevich advises companies and individuals to put appropriate cybersecurity measures in place. Existing safeguards still apply, and it's vital to keep updating and strengthening them.

“Researchers are also working on ways to use AI to discover code vulnerabilities and detect attacks,” Cranor says. “Hopefully, advances on the defensive side will be able to keep up with advances on the attacker side, but that remains to be seen.”

While AI-backed systems like ChatGPT have immense potential to change how humans interact with technology, they also pose risks, especially when misused.

“ChatGPT is a great technology and has the potential to democratize AI,” says Shykevich. “AI was kind of a buzzy feature that only computer science or algorithmic specialists understood. Now, people who aren’t tech-savvy are starting to understand what AI is and trying to adopt it in their day-to-day. But the biggest question is how you would use it—and for what purposes?”
