Artificial Intelligence

Latest Talkdesk research highlights voter concerns about AI’s impact on the US 2024 presidential election


By Crystal Miceli



AI is nearly impossible to avoid, whether in AI-powered chatbots, AI-generated content in e-commerce or social media, or some other manifestation of the technology. New AI use cases continue to emerge almost daily, and the technology took a gigantic leap forward in late 2022 with the release of generative AI tools that can perform research and generate realistic text, audio, and visual content from simple prompts. But while generative AI has demonstrably and positively impacted the quality, efficiency, and accuracy of customer experience and content creation, these platforms require safeguards and precautions to mitigate potential issues.

One issue is AI deepfakes: audio clips, photos, or videos that have been convincingly manipulated to misrepresent someone as doing or saying something they never did or said. Recently, online scammers used AI-generated videos of Taylor Swift endorsing Le Creuset products to lure unsuspecting consumers into sharing their personal information. Bad actors have also used AI to trick interactive voice response (IVR) systems that lacked proper security measures and steal sensitive information. Responsible AI is critical to maintaining trust, and brands must create an ethical framework that complies with privacy regulations and ethical standards to ensure transparency and accountability.

While this isn’t a new issue, 2024 marks the first year AI-generated deepfakes and misinformation could play a role in the US presidential election, because the technology is more widely available than ever. In the Talkdesk AI and the Election Survey, voters expressed significant concerns about how AI-generated content could put democracy at risk.



What are the overarching voter concerns?

With generative AI in play for the first time, 55% of Americans surveyed are concerned deepfakes will undermine the democratic process. Voters share many other concerns about AI’s impact on the US presidential election, from increased polarization to foreign and domestic election interference and everything in between.

  • 55% believe AI-driven content recommendations cause confirmation bias and contribute to political polarization.

  • 21% expect their vote to be swayed by election deepfakes and misinformation, and 31% worry they can’t reliably distinguish between real and fake election content.

  • 58% worry deepfakes will be used to discredit the election results.

  • 62% are concerned foreign governments will use AI deepfakes to influence election results in their favor.

  • 51% say they will lose trust in American democracy if deepfakes impact voting, with 14% saying they will never vote again, including 18% of Gen Z voters, many of whom are voting in a presidential election for the first time.



Election deepfakes and misinformation threaten the brand-consumer relationship.

It’s not just lawmakers and presidential candidates who have to worry about bad actors; brands must also be concerned about how their content is perceived by buyers and whether they’re doing enough to prevent cybercriminals from misusing AI to commit fraud. When building their strategies, CX leaders must account for the increase in deepfakes during an election year and the heightened impact on consumer trust.

  • 35% say election-related deepfakes and misinformation lead them to question all content they see online.

  • 68% say they’re more wary of how brands use AI to develop content.

Most Americans agree on three immediate measures companies should take to stop the spread of harmful AI usage:

  • Be vigilant in monitoring for and debunking false claims related to their brand (55% agree).

  • Deploy mitigation measures against harmful AI (44% agree). For example, use multi-factor authentication (MFA), such as a text message with a one-time code that confirms the user’s identity when they contact the brand. That way, a bad actor who uses a voice deepfake of a customer in an IVR system to try to steal their information won’t succeed (see the sketch after this list).

  • Declare any AI-generated content (54% agree) and disclose their AI safety protocols (43% agree).
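
To make the MFA example above concrete, here is a minimal sketch of how an IVR flow might issue and verify a one-time text code before releasing any account details. It is illustrative only: the `send_sms` helper, the in-memory store, and the 120-second expiry are assumptions for the sketch, not a description of the Talkdesk platform or any specific vendor’s API.

```python
# Minimal one-time-code (MFA) sketch for an IVR flow. Assumptions: a
# hypothetical send_sms delivery function and an in-memory store; a real
# deployment would use an SMS gateway and a persistent, per-session store.
import hmac
import secrets
import time

CODE_TTL_SECONDS = 120  # how long a code stays valid (assumed value)
_pending: dict[str, tuple[str, float]] = {}  # phone number -> (code, issued_at)


def send_sms(phone_number: str, message: str) -> None:
    """Placeholder for an SMS gateway call (hypothetical)."""
    print(f"SMS to {phone_number}: {message}")


def issue_code(phone_number: str) -> None:
    """Generate a 6-digit one-time code and text it to the number on file."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone_number] = (code, time.time())
    send_sms(phone_number, f"Your verification code is {code}")


def verify_code(phone_number: str, entered: str) -> bool:
    """Check the code the caller keyed into the IVR; reject missing, expired, or wrong codes."""
    record = _pending.pop(phone_number, None)
    if record is None:
        return False
    code, issued_at = record
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(code, entered)
```

Even a flawless voice deepfake cannot pass this check unless the attacker also controls the customer’s phone, which is why layering MFA on top of voice channels blunts this class of fraud.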



Culture or policy? The impact of deepfakes depends on the generation.

Numerous issues are on voters’ minds in any election cycle, and their importance heightens in a presidential year. Younger voters have different concerns than their older counterparts, but all brands must be aware that bad actors will take any opportunity to exacerbate polarization and create a more contentious political environment.

  • 32% of Millennials say false claims of a gas price hike would turn them off from voting, while false claims of country-based immigration bans would alienate 40% of Baby Boomers.

  • 15% of Gen Z voters would be most turned off by a candidate’s “plan” to ban The Tortured Poets Department, Taylor Swift’s latest album; meanwhile, only 5% of Millennials and 4% of Baby Boomers would be most turned off from voting for a candidate by those claims.



Many voters don’t double-check election content before sharing it with friends and family.

A third of voters will rely on online information about candidates or political parties to make their voting decision. But…

  • 11% of voters who are aware of deepfakes admitted they don’t verify whether the news they use to make voting decisions is real or fake, including 21% of Gen Z.

  • 30% of Americans share political content with friends and family without confirming whether it’s a deepfake. Despite being the digital-native generation, half of Gen Z voters (52%) share content without verification, while Baby Boomers are far less likely to do so (14%).



Who is responsible for putting a stop to deepfakes and misinformation?

Just as consumers demand that their favorite brands implement ethical AI policies and communicate proactively about how they’re using the technology, voters demand the same from legislators and presidential candidates.

  • 55% say the government should do more, such as through regulation, to stop the spread of AI-generated deepfakes, and 51% think it should be illegal to make deepfakes.

  • 50% will evaluate candidates based on their efforts to proactively end the spread of election-related falsehoods and disinformation.



Safe, ethical AI is non-negotiable in 2024 (and beyond!).

In today’s hyper-competitive business and hyper-polarized political environments, monitoring and controlling how AI is developed and implemented is paramount. Whether it’s a global brand, a government agency, or a political candidate, everyone has a role in fostering a culture of trust. AI is here to stay and can be a potent tool when used properly, and no organization or individual, no matter how large, is immune to failure when it comes to responsible AI usage.

Customer experience is often the first place AI is adopted in an enterprise. Talkdesk has been serving up practical AI in the contact center since 2018 and is an expert at enabling brands to implement guardrails for ethical AI. Learn more about how Talkdesk empowers responsible AI usage through its new suite of generative AI features that make AI more responsible, accurate, and accessible in the contact center.


TALKDESK AI PLATFORM

Generative AI for the contact center.

Embrace a new era of automation and intelligence for ultimate cost savings and operational excellence with contact center solutions powered by Talkdesk AI.

Methodology

The survey was conducted for Talkdesk by Pollfish via an online questionnaire during April 2024. The sample was 1,000 individuals aged 18+ in the U.S. who are planning to vote in this year’s presidential election and are familiar with AI deepfakes.



Crystal Miceli

Crystal Miceli serves as Vice President of Product & Industry Marketing at Talkdesk, where she leads the execution of the product marketing strategy, including the development of defensibly differentiated product positioning & messaging – telling the story of the value that Talkdesk solutions provide to customers. Crystal has over 25 years of experience in marketing, product management, professional services, customer success and engineering at Cisco, Wal-Mart, BMC Software, CA Technologies (now Broadcom), Ivanti, Alida and others. Crystal holds a BS in Computer Science from Tulane University and an MBA from Louisiana State University. She volunteers for local causes and is a proud member of the Junior League of New Orleans.