AI Governance Alone Won't Save Us

On October 30, 2023, the Biden Administration issued an executive order proposing safeguards to mitigate concerns around AI-generated content. One proposed safeguard was instituting watermarking and similar provenance indicators to denote the authenticity of content, with the intent of preventing deception and fraud.

Having a verifiable signature from federal agencies sounds helpful, in theory, for demarcating what is legitimate versus what may be misinformation. However, there are a few concerns with this proposal: (1) the signature must be truly verifiable, meaning it must be resistant to spoofing and other impersonation tactics, and (2) extending such regulation to the private sector implies that the government should act as the authority over an ever-wider share of communications. This in turn suggests that the government should, in practice, control all information and messaging, something prohibited in the United States by the First Amendment.
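To make the spoofing concern in (1) concrete, here is a minimal toy sketch of content authentication. It is purely illustrative and does not represent any scheme in the executive order; the key and function names are hypothetical. It tags content with a keyed hash so that tampered content fails verification. Note that a real provenance system would need asymmetric signatures, so that verifiers never hold the signing key, plus key distribution and revocation, which is exactly where spoofing resistance gets hard.

```python
import hmac
import hashlib

# Hypothetical secret key held by the issuing agency (illustrative only).
SIGNING_KEY = b"agency-secret-key"

def watermark(content: bytes) -> str:
    """Attach an authenticity tag to a piece of content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Recompute the tag; constant-time comparison resists timing attacks."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

notice = b"Official statement: ..."
tag = watermark(notice)
print(verify(notice, tag))        # authentic content passes
print(verify(b"Spoofed text", tag))  # altered content fails
```

Even this toy version shows the core tension: whoever holds the signing key decides what counts as "authentic," which is precisely the centralization concern raised above.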

The idea that the government ultimately controls what content is considered factual requires complete trust in that government as an objective body. But all entities, be they corporations or governments, have their own vested interests and are prone to their own oversights and errors. This is why a centralized authority on misinformation can diminish the voice of citizens and reduce their ability to hold these systems accountable when they do falter.

Protecting online safety without compromising the democratization of digital content

The Internet has democratized the creation and propagation of information. While the ability for virtually anyone to participate carries the risk of misinformation, filtering content through a centralized institution does not remove this risk. In fact, entrusting the government to be an objective moral arbiter of AI-generated content creates greater risks of censorship, again prohibited in the United States by the First Amendment. How do we know the government, or corporations verified by the government, will not engage in misinformation en masse? Furthermore, what does "safety" mean, and whose safety is being protected?

Such questions are important to ask when considering alternatives that protect user safety and mitigate misinformation and disinformation risks without compromising the democratization of digital content.

A more principled approach may be to start by aligning on what safety and security mean. In her paper, Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems, Heidy Khlaaf does a great job of establishing fundamental terminology in this context, providing a foundation for more nuanced conversations going forward.