Disinformation and Non-State Actors

Disinformation has become an increasingly prevalent issue globally, influencing national and international politics as well as everyday life. But what is disinformation? And what non-state actors and platforms are spreading disinformation?

What is disinformation?

The American Psychological Association defines disinformation as “false information which is deliberately intended to mislead—intentionally misstating the facts.” Disinformation and misinformation are commonly confused; they can be distinguished by intent: disinformation deliberately misleads, while misinformation simply gets the facts wrong.

What is the reasoning behind disinformation?

In order to understand who is spreading disinformation, it helps to look at the rationale behind how it is spread. Watts (2020: 2) explains that a smart disinformation “peddler” spots what the audience wants to hear, then designs disinformation to engage them, allowing actors to build a “narrative” or to feed audiences narratives that confirm their existing beliefs. This can be done for ‘clickbait’ or to further political agendas, with misinformation and disinformation overlapping and amplifying each other (Watts, 2020: 2).

Who are the non-state actors spreading disinformation?

The most prominent platform for disinformation by non-state actors is Twitter/X: TrustLab (2023: 51) found that Twitter had the largest ratio of disinformation actors and YouTube the smallest. Looking more closely, numerous media sources reported across 2024 and 2025 that the platform’s owner, Elon Musk, was himself a source of mis/disinformation (BBC, 2025; Bruggeman, Romero, 2024; Dang, Singh, 2024; Ingram, 2024; Leingang, 2024).

Kirdemir (2019: 6) expands on this issue, arguing that social media promotes competition, with rivalling actors vying for “power and influence”. The actors producing disinformation in this environment include:

“trolls, bots, fake-news websites, conspiracy theorists, politicians, highly partisan media outlets, the mainstream media, and foreign governments.”

In the context of foreign influence, the most common actors overall were found to be private companies (non-state actors), media organisations (non-state actors), foreign government officials (state actors) and intelligence agencies (state actors) (Kirdemir, 2019: 3). Common strategies among these actors were defamation, persuasion and polarisation, with tactics including the creation of original misleading content, the amplification of existing material, the “hijacking” of conversations and the distortion of facts over time (Kirdemir, 2019: 3).

What platforms is disinformation spreading on?

Among the platforms primarily used for spreading disinformation by both state and non-state actors, social media has consistently been the most closely associated with it, as the nature of these platforms increases the reach and engagement of problematic content (Echeverría et al., 2024: 141). This growth in problematic content raises risks for democracies, especially in countries where social media is widely used but not regulated, such as Brazil (Echeverría et al., 2024: 141). Further examples of non-state actors engaged in spreading disinformation include trolls, hired trolls, hyper-partisan websites and conspiracy theorists (Echeverría et al., 2024: 141).

Rotondo and Pierluigi (2019: 6) indicated that cyber election interference is often carried out by non-state actors acting on the instructions of a “foreign power” in order to interfere with the “target state’s political system”, with “trolls” posting fake news and socially divisive content on social media. The GAO (2024) has further argued that tactics used to create or spread disinformation, including employing foreign actors behind fake social media accounts and websites, hold connections to foreign governments, indicating a link between state and non-state disinformation.

Why AI is a problem for disinformation

AI is problematic because it exacerbates the challenges of mis- and disinformation: AI tools make it easier for anyone to create fake images and news that are hard to distinguish from accurate information (Shin, 2024). In 2023, the number of AI-enabled fake news sites increased tenfold (Shin, 2024), and AI’s affordability and accessibility help non-state actors overcome the resource and expertise disadvantages of operating outside a state structure (Kreps, Li, 2022). Kreps and Li (2022) identify terrorist, hacking and drug-trafficking groups as the most prominent examples of non-state groups using AI to spread disinformation.

The role of disinformation is becoming increasingly significant, with platforms such as Twitter/X and AI technology exacerbating both the creation and the spread of disinformation.

To read more research by this author on disinformation please see: https://committees.parliament.uk/writtenevidence/137997/pdf/