As elections face mounting disinformation campaigns from foreign actors, artificial intelligence (AI) poses a new threat. AI can generate copious amounts of text swiftly and cheaply, making it a fitting tool for online propaganda and escalating the potential for election interference.
Emergence of AI in election interference
The landscape of foreign election interference has evolved significantly and grown more sophisticated. In 2016, Russia launched disinformation campaigns via social media targeting the U.S. presidential election. In the years since, countries such as China and Iran have followed suit, using social media platforms to sway foreign elections, including those in the U.S. The latest addition to this escalating threat is generative AI and large language models. These tools can quickly produce vast amounts of text on any topic, from any viewpoint, and in any tone, making them powerful instruments for online propaganda.
In the coming months, elections will be held in democracies across the globe, and they are of high interest to countries that have previously conducted social media influence campaigns. China, for example, has shown significant interest in Taiwan, Indonesia, and India, while Russia has focused on the U.K., Poland, and the EU. The United States, of course, is a focal point for all. With AI tools like ChatGPT, the cost of producing and distributing propaganda has dropped significantly, allowing smaller players to get involved in influencing foreign elections.
Challenges in running disinformation campaigns
According to representatives of various U.S. cybersecurity agencies, interference in the 2024 elections is widely expected. Apart from the usual suspects (Russia, China, and Iran), a significant new threat is expected to emerge: domestic actors. Yet despite advances in AI, running a successful disinformation campaign takes more than generating content; distribution is a major challenge. Propagandists must create networks of fake accounts to post their content and boost it into the mainstream for viral spread, and companies like Meta have become increasingly proficient at identifying and taking down such accounts, making distribution harder for those running disinformation campaigns.
Tactics evolution and defense strategies in election interference
In the realm of election interference, the tactics deployed by countries like Russia and China continue to evolve and are likely far more sophisticated than those used in 2016. An important defensive strategy against new disinformation campaigns is the ability to recognize and catalog these tactics. In computer security, sharing attack methods and their effectiveness has been instrumental in building strong defensive systems, and the same logic applies to disinformation. The more researchers study the techniques employed against other countries, the better they can defend their own.
While some democratic elections in the generative AI era have passed without reports of significant disinformation, that does not mean the threat is non-existent. It is crucial to remain prepared and vigilant. Understanding what to expect helps in developing better strategies to combat AI-powered disinformation campaigns; the sooner we anticipate these threats, the better equipped we will be to deal with them.