Sunday, January 18, 2026

AI Chatbots Can Effectively Influence Voters, In Any Direction

A brief interaction with a chatbot can significantly shift a voter’s opinion about a presidential candidate or proposed policy in either direction, according to new research from Cornell University.

The researchers published the findings simultaneously in two articles: “Persuading Voters Through Human-AI Dialogues” in Nature and “The Levers of Political Persuasion with Conversational AI” in Science.

The potential of artificial intelligence to influence election results is a major public concern. The two new articles, based on experiments conducted in four countries, demonstrate that chatbots powered by large language models (LLMs) are highly effective at political persuasion, shifting the preferences of opposition voters by 10 percentage points or more in many cases. The persuasive power of LLMs stems not from any mastery of psychological manipulation, but from the wealth of arguments they can marshal in support of a candidate’s positions.

“LLMs can significantly influence people’s attitudes toward presidential candidates and their policies by providing numerous factual claims that support their position,” says David Rand, a professor of information science and of marketing and communication management at Cornell, and lead author of both articles. “However, these claims are not necessarily accurate, and even arguments based on accurate claims can be misleading by omission.”

In the Nature study, Rand and co-lead author Gordon Pennycook, an associate professor of psychology, trained AI chatbots to change voters’ attitudes toward presidential candidates. They randomly assigned participants to a text conversation with a chatbot promoting one position or the other, then measured any changes in participants’ opinions and voting intentions. The researchers ran this experiment three times: in the 2024 US presidential election, the 2025 Canadian federal election, and the 2025 Polish presidential election.

They found that, two months before the US election, chatbots focused on the candidates’ policies led to a slight shift in opinion among more than 2,300 Americans. On a 100-point scale, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris, an effect roughly four times greater than that of traditional ads tested during the 2016 and 2020 elections. The pro-Trump AI model moved likely Harris voters 1.51 points toward Trump.

In similar experiments with 1,530 Canadians and 2,118 Poles, the effect was much greater: the chatbots altered the attitudes and voting intentions of opposition voters by approximately 10 percentage points. “This was a surprisingly large effect, especially in the context of presidential politics,” Rand commented.

The chatbots employed a variety of persuasive tactics, but politeness and providing evidence were the most common. When the researchers prevented the model from using facts, its arguments became much less convincing, demonstrating the critical role that fact-based claims play in AI persuasion.

The researchers also verified the chatbots’ arguments using an AI fact-checking model validated against professional human fact-checkers. On average the claims were mostly accurate, but in all three countries chatbots tasked with supporting right-wing candidates made more inaccurate claims than those supporting left-wing candidates. This finding, validated with politically balanced citizen panels, echoes the frequently replicated observation that right-leaning social media users share more inaccurate information than left-leaning users, the researchers concluded.

In the Science article, Rand collaborated with colleagues at the UK’s AI Security Institute to investigate what makes these chatbots so persuasive. They measured the changes in opinion of nearly 77,000 UK participants who interacted with chatbots on more than 700 political topics.

“Larger models are more persuasive, but the most effective way to increase their persuasive power was to instruct the models to supplement their arguments with as much data as possible and provide them with additional training focused on increasing their persuasive ability,” Rand explains. “The most persuasive model achieved a surprising 25-percentage-point shift among opposition voters.”

This study also demonstrated that the more persuasive a model was, the less accurate the information it provided. Rand suspects that as a chatbot is pressured to supply more and more factual statements, it eventually exhausts its accurate information and begins to fabricate.

The finding that factual statements are key to an AI model’s persuasiveness is supported by a third recent article published in PNAS Nexus by Rand, Pennycook, and colleagues. The study showed that AI chatbot arguments reduced belief in conspiracy theories even when people believed they were speaking with a human expert. This suggests that it was the persuasive messaging that worked, not the belief in the AI’s authority.

In both studies, all participants were informed that they were conversing with an AI and were given detailed feedback afterward. Furthermore, the direction of the persuasion was randomized so that the experiments would not skew overall opinions.

Studying AI persuasion is essential for anticipating and mitigating its misuse, the researchers stated. By testing these systems in controlled and transparent experiments, they hope to inform ethical guidelines and policy debates on how AI should and should not be used in political communication.

Rand also points out that chatbots can be effective persuasion tools only if people interact with them in the first place, and getting them to do so is itself a significant hurdle. But there is little doubt that AI chatbots will become an increasingly important part of political campaigns, Rand concludes. “The challenge now is to find ways to limit the harm and help people recognize and resist AI persuasion.”
