AI tools in hybrid warfare - A double-edged sword

The synergy between AI-generated content and social media amplification presents a particularly potent threat.

30 May 2023

By Kristian Bischoff, Europe and Russia analyst

As technology advances, so does the potential for its misuse. Large language models (LLMs) and image generators have become hugely popular, seemingly overnight, following the launch of ChatGPT in November 2022. Since then, companies and individuals have scrambled to find uses for them across a wide range of sectors.

LLMs continue to be upgraded and refined by their creators, and Adobe recently released its “Generative Fill” AI tool for Photoshop, highlighting how fiercely tech companies are competing to offer the most capable software. However, these tools can also be put to malicious use, particularly in relation to hybrid threats.

Threat

Image purporting to show an attack on the Pentagon - very likely AI-generated

On 22 May 2023, news spread quickly of an attack on the Pentagon. The story was disseminated, along with an image, via a verified Twitter account associated with Bloomberg News and caused significant anxiety on US stock and investment markets. However, as it turned out, the account sharing the story was fake, the story was fabricated, and the image was very likely generated by an AI image tool.

Hybrid threats and hybrid activities refer to the use of a combination of conventional and unconventional tactics by a state or non-state actor to achieve its objectives, often in a context of international competition, tension, and war. As seen in multiple examples in recent years, such hybrid activities can include disinformation campaigns intended to influence public opinion, sow discord, and create confusion. Nor is the problem limited to hybrid activities; the same tools may just as well be employed in domestic political campaigns or by activists.

Herein lies the issue with AI tools: as technology advances, so does the potential for its misuse. One of the most significant concerns is the potential for AI-generated disinformation to manipulate public opinion. For example, an image generator could create a realistic-looking image of a political figure engaged in illegal activity - or of an attack on the Pentagon - even if that activity never occurred. By targeting vulnerable individuals or specific demographics with tailored narratives, adversaries can exploit pre-existing divisions within societies and amplify distrust, ultimately eroding social cohesion. Some narratives may be used to target financial markets or influence decision-makers.

The synergy between AI-generated content and social media amplification presents a particularly potent threat. If a threat actor combines the generative capabilities of LLMs and image generators with botnets or large numbers of fake social media accounts, they can rapidly disseminate disinformation across platforms, creating the illusion of widespread support for a particular narrative. This can quickly saturate the information space, drown out opposing viewpoints, manipulate public sentiment, and sway public opinion. As in the Pentagon case, if a narrative is pushed as part of a larger effort - backed by more fake sources or additional images from different angles - the impact can be considerable and difficult to debunk initially.

The speed and scalability of AI-driven disinformation campaigns mean that even a small team can create an overwhelming volume of content, making it difficult for authorities and traditional fact-checking mechanisms to keep up.

Mitigation

To address these challenges, governments, private-sector entities, and civil society must collaborate to develop robust countermeasures. For the social media giants, enhancing AI capabilities for content verification and for detecting AI-generated disinformation should be a priority. Advances in machine-learning algorithms can help identify patterns and anomalies that indicate the presence of AI-generated content, allowing more effective detection and mitigation.
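As a rough illustration of what such anomaly detection can look like in practice, the sketch below scores a piece of text with an off-the-shelf reference language model: machine-generated text often shows unusually low perplexity, meaning it is statistically “smoother” than typical human writing. This is a minimal heuristic rather than a production detector; the choice of the gpt2 reference model and the flagging threshold are assumptions, and real systems combine many stronger signals such as provenance metadata, account behaviour, and image forensics.

```python
# Minimal perplexity-based heuristic for flagging possibly AI-generated text.
# Assumptions: the "gpt2" reference model and the 50.0 threshold are
# illustrative choices, not validated detection parameters.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score text with the reference model; lower means more statistically predictable."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

def flag_for_review(text: str, threshold: float = 50.0) -> bool:
    # Suspiciously "smooth" text is flagged for human review, not removed automatically.
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = "Officials confirmed a large explosion near the Pentagon this morning."
    print(f"perplexity={perplexity(sample):.1f}, flagged={flag_for_review(sample)}")
```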

Additionally, promoting media literacy and critical thinking skills among the general population is crucial. By equipping individuals with the tools to recognize and evaluate disinformation, they can become more resilient against manipulation attempts. Educating the public on the tactics employed in hybrid warfare and raising awareness about the existence of AI-generated disinformation campaigns are essential steps toward mitigating their impact.

The worst-case scenario is that the mere existence of these tools, and the knowledge of their use against ordinary people, ultimately leads to an erosion of trust in almost every type of information shared online.

In conclusion, large language models and image generators have the potential to be powerful tools for good, but they can also be put to malicious use. As we continue to develop and use these tools, it is essential that we remain vigilant and take steps to mitigate the risks associated with their misuse.

