How Generative AI has transformed the online spam and abuse landscape

The emergence of generative AI has given rise to an alarming increase in AI-generated spam and abuse.

Products like ChatGPT, DALL-E, and GitHub Copilot have showcased remarkable content creation capabilities. The models underlying these technologies mirror their training material to craft content spanning text, images, music, and code, often producing outcomes that can enhance many aspects of our lives. However, this very capability also opens the door to misuse, particularly in the generation of spam and abusive material.

AI-generated abuse is flooding all corners of online life, from news sites to dating apps. The misinformation tracking site NewsGuard has identified 603 news sites publishing AI-generated content without human oversight or verification, with some generating 1,200 articles a day. Content farms have also reportedly used ChatGPT to abuse targeted ad systems with clickbait headlines engineered to entice readers into following spammy links, to flood shopping reviews, and to fuel harassment. Because ChatGPT can produce content at a scale and speed that far surpass traditional methods, it can be harnessed to intensify harassment and cyberbullying by generating large volumes of harmful content quickly and persistently.

AI has also played a role in more extreme malicious activity, such as the creation and dissemination of sexually explicit material. In Quebec, a man who used AI to create synthetic child pornography was sentenced to prison. Deepfake pornography has likewise become a tool for “sextortion”, with scammers using fabricated explicit imagery to demand payment from the victims they target.

Challenges of AI-generated abuse: human verification and scale

While examples of misuse vary across a range of categories and a spectrum of severity, one of the clearest challenges of AI-generated abuse involves discerning it from human-authored content.

In August 2002, Paul Graham authored an essay titled A Plan for Spam. I highly recommend reading it, because it offers a window into just how far spam has evolved in the span of 21 years. My favourite line from the essay underscores exactly why existing methods fall short in a modern context: “If you hired someone to read your mail and discard the spam, they would have little trouble doing it.” In 2023, spam detection is not as obvious.

This is because historic examples of phishing and spam were filled with clear markers: copious spelling and grammar errors, or long, unwieldy strings. ChatGPT-generated content, by contrast, is flawlessly written and often indistinguishable from human-authored material. That difficulty poses a heightened risk to both content and user integrity.
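Graham's essay popularized the token-probability filter, and seeing one in miniature makes the problem concrete. The sketch below is a heavily simplified, illustrative take on that idea; the tokenizer, smoothing, and scoring are assumptions of mine rather than Graham's exact formulas. It scores a message by how strongly its tokens are associated with previously seen spam.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercased word-ish tokens; Graham's tokenizer is richer (it keeps
    # header context, punctuation, and case), so this is a simplification.
    return set(re.findall(r"[a-z0-9'$-]+", text.lower()))

class TokenSpamFilter:
    """A stripped-down, Graham-style Bayesian token filter (illustrative only)."""

    def __init__(self):
        self.spam_tokens, self.ham_tokens = Counter(), Counter()
        self.spam_docs, self.ham_docs = 0, 0

    def train(self, text, is_spam):
        if is_spam:
            self.spam_tokens.update(tokenize(text))
            self.spam_docs += 1
        else:
            self.ham_tokens.update(tokenize(text))
            self.ham_docs += 1

    def spam_probability(self, text):
        # Sum per-token log-odds; Laplace smoothing keeps unseen tokens neutral.
        log_odds = 0.0
        for tok in tokenize(text):
            p_spam = (self.spam_tokens[tok] + 1) / (self.spam_docs + 2)
            p_ham = (self.ham_tokens[tok] + 1) / (self.ham_docs + 2)
            p = p_spam / (p_spam + p_ham)
            log_odds += math.log(p) - math.log(1 - p)
        log_odds = max(-30.0, min(30.0, log_odds))  # avoid overflow in exp()
        return 1 / (1 + math.exp(-log_odds))
```

Trained on classic junk mail, a filter like this latches onto telltale tokens such as “v1agra” or mangled URL fragments and scores those messages close to 1.0. Feed it a grammatical, on-topic paragraph produced by a language model and the per-token evidence largely cancels out, leaving the score hovering near 0.5, which is precisely the ambiguity Graham's hypothetical human reader never faced.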

The challenge of identifying AI-generated content is compounded by the fact that the technology can produce high volumes of content rapidly, leaving existing systems unprepared to tackle this new wave of spam and abuse.
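Content-agnostic signals are one way platforms can start to catch up with that volume. As a purely illustrative example (the thresholds and class below are hypothetical, not any particular platform's defenses), a simple posting-velocity check flags accounts producing content faster than a human plausibly could, routing them for closer review rather than judging the text itself.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds; a real system would tune these per platform
# and combine them with many other signals.
MAX_POSTS = 20          # posts allowed...
WINDOW_SECONDS = 60.0   # ...within this sliding window

class VelocityLimiter:
    """Flags accounts whose posting rate exceeds a plausible human pace."""

    def __init__(self, max_posts=MAX_POSTS, window=WINDOW_SECONDS):
        self.max_posts = max_posts
        self.window = window
        self.history = defaultdict(deque)  # account_id -> recent post timestamps

    def record_post(self, account_id, now=None):
        now = time.monotonic() if now is None else now
        timestamps = self.history[account_id]
        timestamps.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        # True means "send this account for closer review", not "ban it".
        return len(timestamps) > self.max_posts
```

Heuristics like this are crude on their own, but they sidestep the question of whether the text reads as human-authored.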

Future prospects and solutions

Just as there is no silver bullet for online safety issues at large, there is no single solution to the harms amplified by AI. Instead, a number of solutions must be considered, ones that are context-aware and relevant to individual platforms. This means taking a multi-faceted approach: adapting policy and ethical guidelines, evolving technologies for monitoring and mitigation, and acting with accountability and transparency throughout.

Since discerning what is real versus what isn’t sits at the heart of this new class of abuse challenges, one may wonder whether platforms should be required to offer services that identify and flag generative AI content. First, the definition of what is “real” must be clarified. In some cases, AI-generated avatars, characters, and influencers may become socially accepted, while other cases may require stricter identity verification to assert that an actor is human, such as AI-generated dating app profiles used to run romance scams, or inauthentic digital clones used for job applications. These considerations are both platform- and context-specific, as the sketch below illustrates.
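One illustrative way to encode such context-specific expectations is a per-surface policy table. Everything below, the surface names, the policy fields, and the check itself, is hypothetical and meant only to show how the same question ("is this AI-generated, and is a human behind it?") can carry different weight in different corners of a platform.

```python
from dataclasses import dataclass

@dataclass
class SurfacePolicy:
    allow_ai_content: bool           # is AI-generated content permitted at all?
    require_ai_label: bool           # must AI-generated content be flagged to users?
    require_human_verification: bool # must a verified human be behind the account?

# Hypothetical surfaces and settings, for illustration only.
POLICIES = {
    # An art community may welcome AI content as long as it is labelled.
    "art_community": SurfacePolicy(True, True, False),
    # Dating profiles and job applications hinge on a real human existing,
    # so they demand stricter identity verification.
    "dating_profiles": SurfacePolicy(False, True, True),
    "job_applications": SurfacePolicy(False, True, True),
}

def is_allowed(surface, ai_generated, ai_labelled, human_verified):
    """Check one piece of content against its surface's policy (illustrative)."""
    policy = POLICIES[surface]
    if policy.require_human_verification and not human_verified:
        return False
    if ai_generated and not policy.allow_ai_content:
        return False
    if ai_generated and policy.require_ai_label and not ai_labelled:
        return False
    return True
```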

While not every context warrants knowing whether something was generated using AI, certain situations demand it. For instance, the production and dissemination of child pornography is a criminal offense, and violates policy whether or not AI was involved. Yet determining whether such material is AI-generated can drastically change law enforcement outcomes, because it bears on whether an actual child is being exploited.

A double-edged sword

As we marvel at these tools’ ability to mimic human-authored material, we must also manage the deployment of generative AI carefully, guided by ethical considerations that result in responsible use. While benign manipulations may yield hyper-realistic art, more concerning uses include the generation of deepfakes, the deployment of sophisticated phishing attacks, and the spread of misinformation and disinformation, along with impersonation, identity theft, and heightened privacy risks. This is why a nuanced approach, one that considers the problem from multiple angles, is necessary for harnessing AI’s capabilities for constructive ends while minimizing destructive outcomes.