
Sophos Anticipates AI-Based Attack Techniques and Prepares Detections


At the Moment, Adversaries Are Skeptical of AI for Cybercrime, According to Sophos Research

Sophos released two reports on AI in cybercrime. The first, “The Dark Side of AI,” shows how scammers might exploit technology like ChatGPT for widespread fraud with minimal technical expertise. The second report, “Cybercriminals Can’t Agree on GPTs,” reveals that some cybercriminals, despite AI’s potential, are skeptical and hesitant to use large language models like ChatGPT for their attacks.

The Dark Side of AI

Using a simple e-commerce template and LLM tools like GPT-4, Sophos X-Ops was able to build a fully functioning website with AI-generated images, audio, and product descriptions, as well as a fake Facebook login and fake checkout page to steal users’ login credentials and credit card details. The website required minimal technical knowledge to create and operate, and, using the same tool, Sophos X-Ops was able to generate hundreds of similar websites in minutes at the push of a button.

“The reason we conducted this research was to get ahead of the criminals. By creating a system for large-scale fraudulent website generation that is more advanced than the tools criminals are currently using, we have a unique opportunity to analyze and prepare for the threat before it proliferates,” said Ben Gelman, senior data scientist, Sophos.

Cybercriminals Can’t Agree on GPTs

For its research into attacker attitudes towards AI, Sophos X-Ops examined four prominent dark web forums for LLM-related discussions. While cybercriminals’ use of AI appears to be in its early stages, threat actors on the dark web are discussing its potential for social engineering. Sophos X-Ops has already witnessed the use of AI in romance-based crypto scams.

In addition, Sophos X-Ops found that the majority of posts were related to compromised ChatGPT accounts for sale and “jailbreaks”—ways to circumvent the protections built into LLMs so that cybercriminals can abuse them for malicious purposes. Sophos X-Ops also found ten ChatGPT derivatives that their creators claimed could be used to launch cyberattacks and develop malware. However, threat actors had mixed reactions to these derivatives and other malicious applications of LLMs, with many expressing concern that the creators of the ChatGPT imitators were trying to scam them.

“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more skeptical than enthused. Across two of the four dark web forums we examined, we found only 100 posts on AI. Compare that to cryptocurrency, where we found 1,000 posts for the same period.”
