The Limitations of Watermarking: Why AI-Generated Image Detection is Essential

The World Economic Forum has ranked misinformation as the top global risk over the next two years, even above extreme weather events and war.

Against this backdrop, watermarking has emerged as a popular technique for labeling and identifying AI-generated content. Companies like Meta and OpenAI have adopted visible markers and invisible watermarks to flag AI-generated images, audio, and video. While these efforts represent a step in the right direction, watermarking alone is insufficient. This blog post explores the limitations of watermarking and why robust AI-generated image detection technology remains essential.

The Glossily Rendered Elephant in the Room or: Why We are Building Our Own Models

With AI advancing at an accelerating pace and being woven into seemingly everything, many companies face a choice: build in-house models or rely on third-party providers. At Nuanced, we aim to balance innovation, customer satisfaction, privacy, and pricing. For our service that detects and identifies AI-generated content, we chose to develop and run our models ourselves, a decision we believe upholds those commitments. There are myriad reasons behind this choice, and we expect that, going forward, more and more companies will make the same one.

How Generative AI has transformed the online spam and abuse landscape

The emergence of generative AI has given rise to an alarming increase in AI-generated spam and abuse.

Products like ChatGPT, DALL-E, and GitHub Copilot have showcased remarkable content creation capabilities. The models underlying these technologies mirror their training material to craft content ranging from text and images to music and code, often producing results that can enhance many aspects of our lives. However, this very capability also opens the door to misuse, particularly the generation of spam and abusive material.

Should platforms be required to identify and flag AI-generated content?

One frequently debated question is whether platforms should be required to identify and flag generative AI content.

Like many such questions, the answer depends entirely on context. Since identifying generative AI content is only useful insofar as it mitigates harm, a more pertinent question is: what new risks and potential for harm does generative AI content create? It is then worth asking: how can those risks be mitigated?

Introducing Nuanced: Detecting Authenticity in the Age of AI

With AI-generated content on the rise, it has become vital to distinguish human-authored content from AI-generated impersonations across a wide range of contexts.

This is why we built Nuanced, a service for detecting AI-generated images. We help companies such as dating apps, ad platforms, news sites, and marketplaces distinguish human-authored material from AI-generated content.