
A developer has harnessed OpenAI technologies to create an AI-powered disinformation tool, highlighting how easily AI can be misused for mass propaganda. The project, known as CounterCloud, took only two months to complete and costs under $400 per month to run.
Ease and affordability of AI propaganda
A developer working under the pseudonym Nea Paw has used OpenAI technologies such as ChatGPT to build an alarming AI 'disinformation machine' in just two months. The project, called CounterCloud, costs less than $400 a month to operate, underlining the disturbing ease and affordability with which mass propaganda can now be produced. Paw's stated goal was to highlight this danger and raise public awareness of how easily AI technologies can be turned to ill intent.
Mechanics of AI disinformation creation
To generate disinformation, Paw fed 'opposing' articles to ChatGPT and directed it to draft counter articles. The AI generated several versions of each article, written in different styles and from different perspectives, fabricating fictitious stories and historical events to cast doubt on the accuracy of the originals.
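The counter-article workflow described above can be approximated with a short prompt-construction sketch. The actual prompts, model, and list of styles CounterCloud used are not public, so everything below is an illustrative assumption, not Paw's real code.

```python
# Hypothetical reconstruction of the counter-article prompting step.
# The style list, wording, and model name are assumptions for illustration.

def build_counter_prompts(opposing_article: str, styles=None) -> list:
    """Build one chat-message list per writing style, each asking the model
    to rebut the 'opposing' article and question its accuracy."""
    if styles is None:
        # Assumed styles; CounterCloud's actual variants are unknown.
        styles = ["straight news report", "opinion column", "eyewitness account"]
    requests = []
    for style in styles:
        requests.append([
            {"role": "system",
             "content": f"You are a journalist writing a {style}."},
            {"role": "user",
             "content": ("Write an article that counters the claims in the "
                         "following piece and casts doubt on its accuracy:\n\n"
                         + opposing_article)},
        ])
    return requests

# Each message list could then be sent to a chat-completion endpoint, e.g.:
#   client.chat.completions.create(model="gpt-3.5-turbo", messages=msgs)
prompts = build_counter_prompts("Example opposing article text.")
print(len(prompts))  # one request per style
```

Generating one request per style is what yields "several versions of the same article" in different voices, each independently disputing the original.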
To enhance the authenticity of the fake articles, Paw created fraudulent journalist profiles and comments, and added audio clips of newsreaders narrating the AI-written articles. The system was also programmed to interact on social media: it liked and reposted posts that matched its narrative and composed 'counter' tweets to contradictory messages, all to further legitimize its disinformation.
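The engagement rules just described (amplify agreeing posts, counter disagreeing ones) reduce to a simple decision function. The stance scorer and thresholds below are stand-ins; how CounterCloud actually classified posts is not public.

```python
# Hypothetical sketch of the social-media engagement logic: amplify posts
# that match the narrative, draft counter-replies to those that contradict it.
# The stance score and thresholds are assumptions, not CounterCloud's values.

def decide_action(stance_score: float,
                  amplify_threshold: float = 0.6,
                  counter_threshold: float = -0.6) -> str:
    """Map a stance score in [-1, 1] (agreement with the bot's narrative)
    to one of the behaviors described in the article."""
    if stance_score >= amplify_threshold:
        return "like_and_repost"        # boost posts matching the narrative
    if stance_score <= counter_threshold:
        return "draft_counter_reply"    # compose a 'counter' tweet
    return "ignore"                     # neutral posts get no engagement
```

In a real pipeline the stance score would come from a classifier or another model call; the point here is only that a few lines of thresholding are enough to automate the amplify-or-rebut behavior.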
Educating the public with CounterCloud
The creators of CounterCloud have declined to release the model on the internet because of the harm it could cause by actively spreading disinformation and propaganda. They believe that revealing how CounterCloud works plays a more positive role: educating the public about the inner workings of such systems and, by extension, their potential for misuse.
Sam Altman, CEO of OpenAI, has expressed fears about the threat AI poses to democratic systems, specifically its capacity to accelerate the production and spread of online disinformation. Altman warns that 'personalized 1:1 persuasion, combined with high-quality generated media, could become a formidable force', pointing to the potential for AI technologies to be misused to manipulate public opinion.