
Should platforms be required to identify and flag AI-generated content?

One frequently debated question is whether platforms should be required to identify and flag generative AI content.

Like many such questions, the answer to this one depends entirely on context. Since the identification of generative AI content is only useful insofar as it mitigates harm, a more pertinent question to ask is: what new risks and potential for harm does generative AI content create? It is then worth asking: how can those risks be mitigated?

Context matters

Some contexts require disclosure of whether content is AI-generated, in which case identification is necessary; in others it makes no difference. For example, if a video posted to a platform contains sexually explicit content, is the problem that it was created by generative AI, or is the problem that the content violates policy, irrespective of how it was created? In that case, the content is problematic no matter what. However, if a video contains child sexual abuse material, knowing whether or not it is AI-generated marks the difference between whether or not a real minor is being exploited, which in turn guides how law enforcement would have to get involved.
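
To make the distinction concrete, here is a minimal sketch in Python of how an enforcement path might branch on provenance for one policy category but not another. The policy labels, outcomes, and function names are hypothetical illustrations, not any platform's actual process or a statement of legal obligations.

```python
from dataclasses import dataclass

# Hypothetical policy categories and outcomes; real platforms and
# jurisdictions define their own taxonomies and obligations.
@dataclass
class ContentReport:
    policy_category: str    # e.g. "adult_sexual_content", "child_safety"
    is_ai_generated: bool   # provenance signal, if known

def enforcement_path(report: ContentReport) -> str:
    """Illustrative routing: provenance only changes the path where it changes the harm."""
    if report.policy_category == "adult_sexual_content":
        # Violates policy however it was made; provenance does not change the outcome.
        return "remove"
    if report.policy_category == "child_safety":
        # Provenance marks whether a real minor may be at risk,
        # which changes the escalation, not just the removal.
        if report.is_ai_generated:
            return "remove_and_report"
        return "remove_report_and_escalate_for_victim_identification"
    return "standard_review"

if __name__ == "__main__":
    print(enforcement_path(ContentReport("child_safety", is_ai_generated=False)))
```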

Of course, no one can perfectly predict which contexts will require disclosing whether generative AI was used. Using generative AI to support content creation may be harmless. However, if that content is being used to propagate misinformation, the implications become more significant.

Identifying properties that make evasion possible

Disclosure of the generation method is also useful when the fact that content is AI-generated is itself what amplifies the harm. For example, if the generation process gives content characteristics that help it evade traditional detection methods, then knowing that a piece of content is AI-generated can improve overall detection and mitigation capabilities, and demarcating it is worthwhile.
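
As a rough illustration of what "demarcating" can buy, the sketch below treats an AI-provenance label as one more signal in a moderation pipeline. The feature names, weights, and thresholds are all assumptions, not a real detection system.

```python
# Minimal sketch: folding an AI-provenance signal into a review-priority score.
# All signals, weights, and thresholds below are hypothetical.

def moderation_score(classifier_score: float,
                     hash_match: bool,
                     declared_ai_generated: bool) -> float:
    """Combine signals into a single review-priority score in [0, 1]."""
    score = classifier_score
    if hash_match:
        score = max(score, 0.95)  # known-bad content matched by hash/fingerprint
    if declared_ai_generated:
        # AI-generated variants often defeat hash or fingerprint matching,
        # so lean more heavily on the classifier and lower the review bar.
        score = min(1.0, score * 1.2)
    return score

def needs_human_review(score: float, threshold: float = 0.7) -> bool:
    return score >= threshold

if __name__ == "__main__":
    score = moderation_score(0.65, hash_match=False, declared_ai_generated=True)
    print(score, needs_human_review(score))
```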

AI is morally neutral

It is unhelpful to moralize generative AI and enforce identification across all platforms, because the way these risks manifest depends entirely on each platform's specific policies. Ultimately, LLMs, diffusion models, and other such architectures are just a new set of interfaces, much as innovations in video production and media provided new interfaces and abstraction layers that engineers, technologists, and content creators can use to produce new digital artifacts.

Generative AI as an abstraction

When considering solutions, one useful framework is to think about generative AI as an abstraction. As in other areas of computing, such as networking or programming language design, abstraction layers provide boundaries between a system's constituent parts, making the system easier to understand and reason about. In the context of abuse mitigation, evaluating the risk of generative AI applications at each abstraction layer can help clearly define where full automation is acceptable and harmless, as opposed to situations in which human oversight is necessary. For example, generative AI poses little risk when it removes the grunt work of tedious tasks such as data entry or video editing, whereas the risk is significantly greater when a healthcare system relies on it for critical diagnoses.
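
One way to make this layered evaluation concrete is a simple mapping from use cases to the oversight they warrant. The tier names and the example mapping below are illustrative assumptions, not an established taxonomy.

```python
from enum import Enum

class Oversight(Enum):
    FULL_AUTOMATION = "full automation acceptable"
    HUMAN_IN_THE_LOOP = "human review of outputs required"
    HUMAN_DECISION = "AI assists, a human makes the decision"

# Hypothetical mapping of use cases, grouped roughly by how close the output
# sits to an irreversible, high-stakes outcome.
RISK_TIERS = {
    "data_entry": Oversight.FULL_AUTOMATION,
    "video_editing": Oversight.FULL_AUTOMATION,
    "content_recommendation": Oversight.HUMAN_IN_THE_LOOP,
    "medical_diagnosis": Oversight.HUMAN_DECISION,
}

def required_oversight(use_case: str) -> Oversight:
    # Default to the most conservative tier for anything unclassified.
    return RISK_TIERS.get(use_case, Oversight.HUMAN_DECISION)

if __name__ == "__main__":
    for case in ("video_editing", "medical_diagnosis", "unlisted_use"):
        print(case, "->", required_oversight(case).value)
```

The design choice worth noting is the conservative default: a use case that has not been explicitly evaluated falls into the highest-oversight tier rather than the lowest.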