Dartmouth study exposes how 5-cent AI bots can flip election polls undetected
By isabelle // 2025-11-20
 
  • AI bots now complete online surveys for pennies while passing nearly all fraud detection checks.
  • As few as 10 to 52 fake responses can flip election polls or distort public health research.
  • Current safeguards like CAPTCHA and logic checks fail to stop AI from mimicking human reasoning.
  • Foreign actors could exploit AI bots to manipulate surveys in English while operating in other languages.
  • Without urgent reforms, polls, science, and democracy could be silently controlled by undetectable algorithms.
A single AI bot can now complete an online survey for five cents (97 percent cheaper than paying a human) while passing 99.8% of fraud detection checks. Worse, just 10 to 52 synthetic responses could flip the results of a 2024 election poll, according to a Dartmouth College study published in Proceedings of the National Academy of Sciences. The implications stretch far beyond politics, threatening public health research, scientific integrity, and the very foundation of democratic decision-making.

The research, led by political scientist Sean Westwood, demonstrates that AI-generated survey responses are now "indistinguishable from real people." These bots don't just fill out forms; they mimic human reasoning, adjust answers based on assigned demographics, and even simulate realistic typing speeds with intentional typos. In tests across 43,000 trials, Westwood's AI passed nearly every standard quality check, including logic puzzles and reverse-scored psychological questions. "They think through each question and act like real, careful people," he warned, "making the data look completely legitimate."

A threat to elections, science, and public trust

The study's most alarming finding? A handful of fake responses can swing election polls. Westwood analyzed seven major 2024 presidential election surveys and found that injecting as few as 10 to 52 AI-generated answers could reverse a candidate's lead. For larger polling averages, like those reported by media outlets, fewer than 30 synthetic responses per survey could distort the entire narrative. "With survey data tainted by bots, AI can poison the entire knowledge ecosystem," Westwood said.

The manipulation isn't limited to politics. Public health studies, economic forecasts, and psychological research all rely on survey data. If AI bots infiltrate these systems, they could skew findings on vaccine safety, disease prevalence, or consumer behavior, all with real-world consequences.

Westwood's experiments showed that a single instructional prompt could drastically alter responses. In one test, the share of respondents naming China as America's top military rival plummeted from 86.3% to 11.7% after a minor adjustment to the AI's programming.
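The arithmetic behind that finding is simple: in a close race, only enough synthetic responses to erase the raw response gap are needed. A minimal sketch, using hypothetical poll numbers (a 1,000-person survey with a 51/49 split, not figures from the study), illustrates why a few dozen injected answers suffice:

```python
def lead_after_injection(n_respondents, share_a, n_fake):
    """Candidate A's lead, in raw responses, after injecting
    n_fake synthetic responses that all favor candidate B."""
    votes_a = round(n_respondents * share_a)
    votes_b = n_respondents - votes_a
    return votes_a - (votes_b + n_fake)

# Hypothetical 1,000-respondent poll with a 51/49 split:
print(lead_after_injection(1000, 0.51, 0))   # A leads by 20 responses
print(lead_after_injection(1000, 0.51, 21))  # 21 fakes flip the lead to -1
```

At five cents per bot-completed survey, reversing that hypothetical lead would cost barely a dollar, which is the economic imbalance the study highlights.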

Why current safeguards fail

Most survey platforms use "river sampling," a form of open enrollment with minimal barriers, to maximize participation. But this approach makes infiltration effortless. Westwood's AI bypassed every detection method, including reCAPTCHA, impossible biography questions (e.g., "Have you visited the moon?"), and attention checks. Even when programmed in Russian, Mandarin, or Korean, the bots produced flawless English responses, raising concerns about foreign interference.

The financial incentives are undeniable. Human respondents typically earn $1.50 per survey, while AI completes the same task for pennies. A 2024 study found that 34% of respondents already admitted using AI to answer open-ended questions, although these were human-assisted cases, not fully autonomous bots. Westwood's findings suggest the problem is far worse than assumed.

Westwood argues that the solution isn't more complex trick questions, which could unfairly exclude legitimate respondents. Instead, he urges transparency in identity verification, stricter limits on survey participation, and a shift away from low-barrier online panels. "The technology exists to verify real human participation," he said. "We just need the will to implement it."

The alternative? A future where polls, research, and public opinion are silently manipulated by algorithms, and where elections, health policies, and economic decisions are based on fabricated data. As Westwood's study shows, the tools to corrupt surveys already exist. The only question is whether society will act before the damage becomes irreversible.

This isn't just about bad data. It's about who controls the narrative. If AI can quietly rewrite public opinion, what's next? Medical studies? Courtroom testimonies? The line between human truth and synthetic deception is blurring fast. And if we don't demand accountability now, we may never know what's real again.

Sources for this article include:

StudyFinds.org

EuroNews.com

Phys.org