4 ways AI can be used and abused in the 2024 election, from deepfakes to foreign interference

Barbara A. Trish, Grinnell College, The Conversation

Published in Political News

The American public is on alert about artificial intelligence and the 2024 election.

A September 2024 poll by the Pew Research Center found that well over half of Americans worry that artificial intelligence – or AI, computer technology mimicking the processes and products of human intelligence – will be used to generate and spread false and misleading information in the campaign.

My academic research on AI may help quell some concerns. While this innovative technology certainly has the potential to manipulate voters or spread lies at scale, most uses of AI in the current election cycle are, so far, not novel at all.

I’ve identified four roles AI is playing or could play in the 2024 campaign – all arguably updated versions of familiar election activities.

The 2022 launch of ChatGPT brought the promise and peril of generative AI into public consciousness. This technology is called “generative” because it produces text responses to user prompts: It can write poetry, answer history questions – and provide information about the 2024 election.

Rather than search Google for voting information, people may instead ask generative AI a question. “How much has inflation changed since 2020?” for example. Or, “Who’s running for U.S. Senate in Texas?”

Some generative AI platforms, such as Google’s AI chatbot Gemini, decline to answer questions about candidates and voting. Others, such as Meta’s AI tool Llama, respond – and respond accurately.

But generative AI can also produce misinformation. In the most extreme cases, AI can have “hallucinations,” offering up wildly inaccurate results.

A CBS News account from June 2024 reported that ChatGPT had given incorrect or incomplete responses to some prompts asking how to vote in battleground states. And ChatGPT didn’t consistently follow the policy of its owner, OpenAI, and refer users to CanIVote.org, a respected site for voting information.

As with the web, people should verify the results of AI searches. And beware: Google’s Gemini now automatically returns answers to Google search queries at the top of every results page. You might inadvertently stumble into AI tools when you think you’re searching the internet.

Deepfakes are fabricated images, audio and video produced by generative AI and designed to replicate reality. Essentially, these are highly convincing versions of what are now called “cheapfakes” – altered images made using basic tools such as Photoshop and video-editing software.

The potential of deepfakes to deceive voters became clear when an AI-generated robocall impersonating Joe Biden before the January 2024 New Hampshire primary advised Democrats to save their votes for November.

After that, the Federal Communications Commission ruled that AI-generated robocalls are subject to the same regulations as all robocalls. They cannot be auto-dialed or delivered to cellphones or landlines without prior consent.

The agency also slapped a US$6 million fine on the consultant who created the fake Biden call – but not for tricking voters. He was fined for transmitting inaccurate caller-ID information.

While synthetic media can be used to spread disinformation, deepfakes are now part of the creative toolbox of political advertisers.

One early deepfake aimed more at persuasion than overt deception was an AI-generated ad from a 2022 mayoral race portraying the then-incumbent mayor of Shreveport, Louisiana, as a failing student summoned to the principal’s office.

The ad included a quick disclaimer that it was a deepfake, a warning not required by the federal government, but it was easy to miss.

Wired magazine’s AI Elections Project, which is tracking uses of AI in the 2024 cycle, shows that deepfakes haven’t overwhelmed the ads voters see. But they have been used by candidates across the political spectrum, up and down the ballot, for many purposes – including deception.

Former President Donald Trump hints at a Democratic deepfake when he questions the crowd size at Vice President Kamala Harris’ campaign events. In lobbing such allegations, Trump is attempting to reap the “liar’s dividend” – the opportunity to plant the idea that truthful content is fake.

Discrediting a political opponent this way is nothing new. Trump has been claiming that the truth is really just “fake news” since at least the “birther” conspiracy of 2008, when he helped to spread rumors that presidential candidate Barack Obama’s birth certificate was fake.


Some are concerned that AI might be used by election deniers in this cycle to distract election administrators by burying them in frivolous public records requests.

For example, the group True the Vote has lodged hundreds of thousands of voter challenges over the past decade, working with just volunteers and a web-based app. Imagine its reach if armed with AI to automate that work.

Such widespread, rapid-fire challenges to the voter rolls could divert election administrators from other critical tasks, disenfranchise legitimate voters and disrupt the election.

As of now, there’s no evidence that this is happening.

Confirmed Russian interference in the 2016 election made clear that foreign meddling in U.S. politics, whether by Russia or another country invested in discrediting Western democracy, remains a pressing concern.

In July, the Department of Justice seized two domain names and searched close to 1,000 accounts that Russian actors had used for what it called a “social media bot farm,” similar to those Russia used to influence the opinions of hundreds of millions of Facebook users in the 2020 campaign. Artificial intelligence could give these efforts a real boost.

There’s also evidence that China is using AI this cycle to spread malicious information about the U.S. One such social media post transcribed a Biden speech inaccurately to suggest he made sexual references.

AI may help election interferers do their dirty work, but new technology is hardly necessary for foreign meddling in U.S. politics.

In 1940, the United Kingdom – an American ally – was so focused on getting the U.S. to enter World War II that British intelligence officers worked to help congressional candidates committed to intervention and to discredit isolationists.

One target was the prominent Republican isolationist U.S. Rep. Hamilton Fish. Circulating a photo of Fish and the leader of an American pro-Nazi group taken out of context, the British sought to falsely paint Fish as a supporter of Nazi elements abroad and in the U.S.

While it doesn’t take new technology to do harm, bad actors can leverage the efficiencies embedded in AI to create a formidable challenge to election operations and integrity.

Federal efforts to regulate AI’s use in electoral politics face the same uphill battle as most proposals to regulate political campaigns. States have been more active: 19 now ban or restrict deepfakes in political campaigns.

Some platforms engage in light self-moderation. Google’s Gemini responds to prompts asking for basic election information by saying, “I can’t help with responses on elections and political figures right now.”

Campaign professionals may employ a little self-regulation, too. Several speakers at a May 2024 conference on campaign tech expressed concern about pushback from voters if they learn that a campaign is using AI technology. In this sense, the public concern over AI might be productive, creating a guardrail of sorts.

But the flip side of that public concern – what Stanford University’s Nate Persily calls “AI panic” – is that it can further erode trust in elections.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Barbara A. Trish, Grinnell College


Barbara A. Trish does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

