AI’s hidden threat to election day

As election day unfolds, concern about deepfakes and AI-generated images centers on how they might manipulate public opinion. While it's true that AI-generated images and videos can mislead voters by showing candidates doing or saying things they never did, there's a subtler yet equally urgent risk that isn't grabbing headlines: AI-driven misinformation that confuses voters about the logistics of voting itself.

Research shows that it is much more difficult to change people's political beliefs outright, but far easier to nudge their behaviors—whether that's influencing where, when, or even if they cast their vote. Misinformation about polling places, particularly outdated or incorrect locations, has left many voters unsure of where and how to cast their ballots. Recent studies indicate that AI chatbots, often used to respond to voter inquiries, provide inaccurate information at alarmingly high rates—over 50% in some cases.

There is no doubt that deepfakes and other synthetic media can misrepresent candidates' policies and platforms. However, a quieter form of interference, the spread of inaccurate logistical information, can just as surely erode trust in the electoral process.

Here's what we can do:

  1. Invest in AI detection: Fact-checking and verifying election information can help prevent confusion. Tools like those developed at Nuanced help verify the authenticity of visual media, ensuring that manipulated images don't reach voters unchecked.
  2. Strengthen media literacy: It will only get harder to tell AI-generated content from the real thing, but educating the public on how to spot synthetic media will empower people to navigate this digital landscape with more confidence.