The Rise of AI in Politics: From VIC to Campaign Promises
Perhaps the first AI contender appeared in the United States in the spring: a ChatGPT-based bot called Virtual Integrated Citizen (VIC), developed by real-life human Victor Miller, made a brief bid for mayor of Cheyenne, Wyoming, on a pledge to run the city exclusively with artificial intelligence.
Generative AI and the 2024 Elections: Predictions vs. Reality
With over 2 billion voters heading to the polls across more than 60 countries, many predicted at the start of 2024 that generative AI would be crucial to, and pose serious hazards for, democratic elections, even if it would not win office itself. Analysts and specialists, however, have since shifted their positions, saying that generative AI probably had little to no impact.
Were the predictions that 2024 would be the year of the AI election simply wrong? Not exactly. Experts who spoke to WIRED suggest that this may still have been the “AI election,” although not in the manner that many had anticipated.
Deepfakes: The Hype, Concerns, and Real Impact
To begin with, much of the hype surrounding generative AI centered on the danger of deepfakes, which researchers and commentators feared might contaminate an already hazy information landscape and deceive the general public.
“I believe that when it comes to AI, there was a lot of concern about deceptive deepfakes,” says Scott Brennen, director of New York University’s Center for Technology Policy. In part because generative AI was tricky to deploy, Brennen says, many campaigns were reluctant to use it to produce deepfakes, especially of opponents. Campaigns in the US were also concerned they would violate a new set of state-level laws that restrict “deceptive” deepfake ads or mandate transparency when artificial intelligence is used in political advertising.
“Given the wording of these laws, it’s kind of unclear what ‘deceptive’ means,” Brennen says, adding that no politician, campaign, or advertiser wants to be a test case.
The Role of AI in Political Campaigns Across the Globe
WIRED launched its AI Elections Project earlier this year to monitor the use of AI in elections worldwide. According to an analysis of the project’s data by Columbia University’s Knight First Amendment Institute, about half of the deepfakes catalogued weren’t necessarily meant to be misleading. And a Washington Post report found that deepfakes did not necessarily mislead people or change their beliefs, but they did widen partisan divides.
Much of the AI-generated content was used to express support or fandom for particular politicians. For example, an AI-generated video of Elon Musk and Donald Trump dancing to the Bee Gees song “Stayin’ Alive” was shared millions of times on social media, including by Republican Senator Mike Lee of Utah.
Misleading AI Content: From Deceptive Fakes to Partisan Divides
“Social signaling is the key. This explains why people spread this information. It isn’t artificial intelligence,” says Bruce Schneier, a public interest technologist and lecturer at the Harvard Kennedy School. “You’re witnessing the consequences of a divided electorate. It’s not as if we had flawless elections throughout our history and now, all of a sudden, there’s AI and false information.”
That’s not to say deceptive fakes didn’t circulate during this election cycle. In the days leading up to Bangladesh’s election, for example, deepfakes urging supporters of one of the country’s political parties to boycott the vote went viral online. Deepfakes did become more prevalent this year, according to Sam Gregory, program director of the NGO Witness, which helps people use technology to protect human rights and runs a rapid-response deepfake detection program for journalists and civil society organizations.
AI-Generated Content for Political Campaigning: Speeches, Ads, and Beyond
“There have been examples of real deceptive or confusing use of synthetic media in audio, video, and image format in multiple election contexts that have stumped journalists or that they have not been able to fully verify or challenge,” he says. The tools and procedures now in place to identify AI-generated media, he adds, are still falling behind the pace at which the technology is evolving, and these detection tools are even less accurate outside the United States and Western Europe.
Thankfully, misleading AI was not used widely or in decisive ways in most elections, but Gregory notes that there is a clear gap in detection tools, and in access to them, for the people who need them most. “Now is not the moment for slackness,” he says.
AI’s Subtle Influence: Translation, Strategy, and Canvassing
He says the very existence of synthetic media has produced a “liar’s dividend,” the ability of politicians to claim that legitimate media is fake. In August, Donald Trump claimed that pictures of sizable crowds attending rallies for Vice President Kamala Harris were generated by artificial intelligence. (They weren’t.) Of all the reports sent to Witness’ deepfake rapid-response team, Gregory says, about a third involved politicians using AI to dispute evidence of a real event, frequently one involving leaked communications.
Brennen, however, argues that the more noteworthy applications of AI over the past year have played out in quieter, less ostentatious ways. “A lot of AI was still going on behind the scenes, even though there were fewer deceptive deepfakes than many people were afraid of,” he explains. “I think there has been a lot more AI writing copy for speeches, emails, and sometimes advertisements.” Because these applications of generative AI are not as consumer-facing as deepfakes, Brennen says, it is not easy to determine the precise extent of their use.
The Future of AI in Politics: Inclusion and Accessibility
Schneier says artificial intelligence contributed significantly to elections through “language translation, canvassing, assisting in strategy.”
During Indonesia’s elections, a political consulting firm used a tool built on OpenAI’s ChatGPT to create campaign tactics and speeches. In India, Prime Minister Narendra Modi used AI tools to translate his speeches into multiple Indian languages in real time. These applications of AI, Schneier says, could benefit democracy as a whole by giving more people a sense of inclusion in the political process and by giving small campaigns access to resources that would otherwise be unavailable.
He believes the impact will be most significant for local candidates. “Most campaigns in this nation are minimal. It’s a candidate for a position that might not even pay,” he says. Schneier calls AI tools that could help such candidates file paperwork or interact with voters “phenomenal.”
AI and Democracy: Opportunities and Challenges Ahead
“In repressive states, AI candidates and spokespeople can help protect opposition candidates and real people,” Schneier adds. Earlier this year, exiled Belarusian dissidents ran an AI candidate as a symbol of protest against the country’s authoritarian president, Alexander Lukashenko, whose government has arrested journalists and dissidents along with their families.
Generative AI companies also got involved in US campaigns this year: Google and Microsoft both trained several campaigns on how to use their products during the election.
Because these tools are still in their infancy, Schneier argues, this might not have been the year of the AI election. But the tools are just getting started.