Evolving Threat: Foreign Election Interference and Artificial Intelligence

The use of artificial intelligence (AI) by foreign actors to influence elections is an evolving and growing threat. The trend began in 2016, when Russia launched a series of social media disinformation campaigns aimed at swaying the outcome of the US presidential election. Since then, other countries, including China and Iran, have used social media to interfere in foreign elections. Now, with the advent of generative AI and large language models, the tools available for election interference have expanded.

Generative AI can produce vast amounts of text on any subject, from any perspective, making it a powerful tool for internet-era propaganda. These technologies are still new, and how much they will improve disinformation campaigns, or how effective those campaigns will be, is not yet fully understood. But with election season approaching in many democracies worldwide, AI-powered disinformation is likely to play a significant role.

In the coming months, voters will go to the polls in Argentina, Poland, Taiwan, Indonesia, India, the European Union, Mexico, the United States, and several African democracies. These elections are of great interest to countries that have previously engaged in social media influence operations. China focuses on Taiwan, Indonesia, India, and several African countries, while Russia’s targets include the UK, Poland, Germany, and the European Union at large. Naturally, the United States remains a prime target for many foreign actors.

AI-driven image, text, and video generators have already been used to inject disinformation into elections. As the cost of foreign influence operations falls, more countries will have the means to run them. Tools like ChatGPT have sharply reduced the cost of producing and distributing propaganda, putting it within reach of many more nations.

Addressing AI-powered disinformation campaigns requires not only monitoring content but also identifying and tracking distribution channels. Companies like Meta have become better at finding and removing fake accounts, but propaganda outlets have shifted from platforms like Facebook to harder-to-monitor messaging services such as Telegram and WhatsApp. Newer platforms like TikTok, owned by the Chinese company ByteDance, are well suited to the short, provocative videos that AI makes easy to produce.

Generative AI tools also enable persona bots: accounts that look like ordinary social media users, posting about everyday life while occasionally espousing or amplifying political messages. Each individual bot may have little influence, but replicated at massive scale, their collective impact becomes significant.
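
To make that scale argument concrete, here is a minimal, hypothetical simulation. Every number in it (fleet size, posting rates, the size of the organic conversation) is an illustrative assumption, not a measurement; the point is only that a fleet of accounts that are each about 98 percent mundane can still make one coordinated talking point the most-amplified message in a conversation.

```python
# Hypothetical back-of-the-envelope simulation of the persona-bot scale effect.
# Every number here is an illustrative assumption, not real platform data.
import random
from collections import Counter

random.seed(0)

NUM_BOTS = 10_000                  # assumed size of the bot fleet
POSTS_PER_BOT = 50                 # assumed posts per account in the window
POLITICAL_RATE = 0.02              # only ~2% of each bot's posts carry the message
ORGANIC_POSTS = 150_000            # assumed organic posts about the same election
DISTINCT_ORGANIC_MESSAGES = 2_000  # organic chatter is spread across many messages

counts = Counter()

# Persona bots: ~98% of their posts are mundane filler (not counted here);
# only the small political fraction repeats the shared talking point.
for _ in range(NUM_BOTS * POSTS_PER_BOT):
    if random.random() < POLITICAL_RATE:
        counts["coordinated talking point"] += 1

# Organic users: their posts scatter across thousands of different messages.
for _ in range(ORGANIC_POSTS):
    counts[f"organic message #{random.randrange(DISTINCT_ORGANIC_MESSAGES)}"] += 1

for message, n in counts.most_common(3):
    print(f"{n:6d}  {message}")
```

In this toy setup the coordinated talking point appears roughly ten thousand times, while even the most popular organic message appears on the order of a hundred times, even though no single bot account stands out from the crowd.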

As foreign actors continue to refine their tactics, it is crucial to develop the ability to recognize and respond to new disinformation campaigns. Fingerprinting these tactics and cataloging them promptly is essential for effective countermeasures. The rise of generative AI has only amplified the sophistication of social media-based disinformation campaigns, making it even more critical to stay vigilant and proactive in defending democratic processes.
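
As a rough sketch of what content-level fingerprinting can look like, the snippet below compares word-shingle fingerprints of posts using Jaccard similarity and flags pairs of accounts pushing near-duplicate text. The account names, sample posts, and the 0.5 threshold are all made-up assumptions; real systems combine this kind of signal with timing, link, and account-metadata analysis.

```python
# Sketch of one simple fingerprinting signal: near-duplicate text across
# accounts, detected with word-shingle fingerprints and Jaccard similarity.
# Account names, sample posts, and the threshold are illustrative assumptions.
import re

def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Return the set of k-word shingles that serves as the post's fingerprint."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two fingerprints: 0.0 (unrelated) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

posts = {  # hypothetical posts from three accounts
    "acct_1": "Candidate X secretly plans to abolish pensions, share before it's deleted!",
    "acct_2": "Candidate X secretly plans to abolish pensions, share this before it's deleted!",
    "acct_3": "Lovely weather at the lake this weekend, highly recommend the north trail.",
}

THRESHOLD = 0.5  # assumed similarity cutoff for flagging possible coordination
fingerprints = {acct: shingles(text) for acct, text in posts.items()}
accounts = list(fingerprints)

for i, a in enumerate(accounts):
    for b in accounts[i + 1:]:
        score = jaccard(fingerprints[a], fingerprints[b])
        if score >= THRESHOLD:
            print(f"possible coordination: {a} <-> {b} (similarity {score:.2f})")
```

Word shingles are deliberately forgiving of the small edits (swapped punctuation, an inserted word) that campaigns use to evade exact-match filters, which is why the two near-identical political posts are flagged while the unrelated account is not.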
