Experts have raised concerns over the potential misuse of AI-generated and AI-enhanced images in politics, warning of threats to democratic processes. Urgent action, they argue, is needed to regulate their use, with particular emphasis on transparency ahead of upcoming major elections.
Rising concern over AI-manipulated images
There's growing concern about the use of AI-manipulated images in the political sphere. This was recently highlighted when a Labour MP faced backlash for sharing an altered image of Prime Minister Rishi Sunak. While it's still unclear whether AI was used in this specific instance, it's widely known that AI tools make it easier to produce convincing fakes. This incident has sparked a broader debate, with experts warning that without proper regulation, such manipulations could pose a real threat to democratic processes.
Experts like Wendy Hall, Regius Professor of Computer Science at the University of Southampton, are speaking out about the potential risks posed by digital technologies, including AI. Hall argues that their misuse should be treated as a top-tier risk to democratic processes. With major elections on the horizon in both the UK and the US, she says these concerns must be addressed swiftly to ensure fair and impartial contests.
Need for ethical principles and regulations
Shweta Singh, an assistant professor of information systems and management at the University of Warwick, argues that a set of ethical principles is needed to assure users that the news they consume is trustworthy. She stresses the urgency of the issue: without such regulation, fair and impartial elections may be hard to guarantee. Policymakers, according to Singh, need to act now, as time is running out.
In the US, efforts to regulate the use of AI in politics are already underway. Congresswoman Yvette Clarke has proposed legislation that would require political adverts to disclose when they contain AI-generated material. This underscores a growing belief among politicians that transparency is essential when AI is used in political campaigns.
Big tech's steps toward AI content regulation
Recognizing the potential risks, some of the world's largest AI companies, including Amazon, Google, Meta, Microsoft, and OpenAI, have agreed to implement new safeguards. One of these is watermarking AI-generated visual and audio content. The agreement came during a meeting with President Joe Biden, highlighting the importance that political leaders and tech giants alike are placing on the issue.