A recent poll suggests that a majority of U.S. adults worry that Artificial Intelligence (AI) will fuel misinformation during the 2024 presidential election. Their concerns center on AI's ability to micro-target audiences, mass-produce persuasive messages, and rapidly generate fake imagery.
Public fear of AI-induced misinformation
AI tools capable of micro-targeting political audiences, mass-producing persuasive messages, and generating fake imagery in seconds are a source of concern for many American adults. The poll found that 58% of those surveyed expect these tools to increase the spread of false and misleading information during the upcoming elections.
Usage of AI chatbots and image generators among American adults remains relatively low – only 30% have used such tools, and less than half (46%) report some knowledge of them – yet the consensus is clear: presidential candidates should abstain from using AI. Whatever the technology's potential benefits, the public is wary of its misuse, particularly in the political sphere.
Specific AI uses in politics rejected by public
The public's disapproval extends to a range of potential uses of AI by presidential candidates. A sizable majority – 83% – consider creating false or misleading media for political ads a bad thing. Respondents likewise frowned on using AI to edit or enhance photos and videos for political ads (66%), to micro-target political ads to individual voters (62%), and to answer voters' questions via chatbots (56%).
Potential regulation of AI in political ads
In response to the growing unease over AI, the Federal Election Commission is weighing a petition that calls for regulating AI-generated deepfakes in political advertisements. The initiative signals a potential turning point in the fight against AI-driven misinformation ahead of the 2024 elections.
Public demand for action against AI misinformation
Public opinion leans heavily towards government and tech companies taking decisive action against AI-generated misinformation. Roughly two-thirds of those surveyed support a government ban on AI-generated content containing false or misleading images in political advertisements. A similar proportion of respondents also advocate for tech companies to label all AI-generated content on their platforms, thereby promoting transparency and accountability.
Most Americans see preventing the spread of AI-generated misinformation in the 2024 presidential election as a collective responsibility. The poll shows that 63% believe tech companies that create AI tools should shoulder a large part of this responsibility. About half of respondents also place a significant share of the duty on news media (53%), social media companies (52%), and the federal government (49%). This shared responsibility underscores the multifaceted approach required to tackle such a complex issue.