Russian-Fuelled Propaganda in the Balkans: Teaching Chatbots How to Lie
Kremlin-aligned actors are exploiting instability in the Western Balkans in an attempt to seed artificial intelligence systems with false information that feeds their narratives. An AI prompt about the 1999 NATO bombing shows how well Russian propaganda has adapted to manipulating AI.
Nowadays, whenever the Kremlin is not firing rockets, it is uploading falsehoods online, transforming the battle for the Balkans into a battle of bits and bytes.
The sprawling propaganda ecosystems dubbed the Pravda network (also known as “Portal Kombat”) and Operation Doppelgänger are deliberate tools of Russia’s hybrid warfare, producing millions of questionable news articles.
The aim is to shape what artificial intelligence learns and then to feed those outputs back into fragile media spheres. In Southeastern Europe, where EU integration, regional security, and neighbourhood peace hang in the balance, that carries real weight in shaping opinions.
The Western Balkans is a geopolitical powder keg: a region of overlapping loyalties, unresolved disputes, and legitimate grievances, where foreign powers have long competed for narrative dominance. Kremlin-aligned actors exploit precisely these weaknesses: Kosovo’s status, NATO’s 1999 bombing, Bosnia’s constitutional deadlock, and the culture-war tropes that polarise societies from Belgrade to Skopje.
Analysts have tracked Russia-linked networks using local-language clones, Telegram channels, and fake news sites to recycle wartime disinformation for Balkan audiences. At the same time, EU officials warn that disinformation is a component of a broader ‘hybrid war’ designed to destabilise aspiring member states and slow their path towards Europe.
AI in contemporary hybrid media systems brings both advantages and risks. Many digital platforms choose to block data crawls, both because of copyright concerns and to prevent their systems from being overloaded by crawlers that harvest data in order to train Large Language Models (LLMs).
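As a rough illustration of how this blocking works: a site publishes a robots.txt file naming the AI crawlers it refuses, and a compliant crawler checks that file before fetching anything. The sketch below, assuming Python's standard urllib.robotparser and OpenAI's real crawler name GPTBot (the article path is hypothetical), shows the mechanism:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt of the kind many news sites now publish to keep
# AI training crawlers out: GPTBot (OpenAI's crawler) is barred
# from the whole site, while every other client is allowed in.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler must ask before fetching a page:
print(parser.can_fetch("GPTBot", "/news/article-123"))     # False: AI crawler blocked
print(parser.can_fetch("NewsReader", "/news/article-123"))  # True: ordinary client allowed
```

The catch, of course, is that robots.txt is a voluntary convention: a crawler that simply ignores the file can still harvest everything it can reach.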
An LLM is an AI system trained on vast amounts of data to process and generate human-like language, which allows it to translate and summarise large volumes of documents and articles, generate fast answers, and even write software code. Such systems are used on a daily basis, mainly through popular chatbots such as OpenAI’s ChatGPT or Google’s Gemini.
However, in the era of digitalisation, many initiatives offer open data for precisely the opposite reason: to be crawled and harvested by the platforms that train LLMs.
The contest between good AI and bad AI has already begun, and when it comes to national security and combating propaganda and fake news (which, in the case of the Western Balkans, generally comes from Russia), good and bad are not what people would normally perceive them to be. From a security standpoint, good AI is the kind that allows for limitations on what it ingests; bad AI is the kind that lets all manner of information flood the internet, enabling a boom in Russian disinformation. This phenomenon is prominent in the Balkans.