Artificial Intelligence in Rage

An in-depth look at how AI and social media amplify rage and radicalization, transforming media and challenging democracy and societal safety.

[Image: Silhouetted figures look into a dark abyss, symbolizing AI-driven echo chambers and fear. Why are people peering into the digital abyss?]

In recent months and years, we have witnessed and felt an increase in radicalization across the globe - radicalization happening at a speed that people in Western societies may not have experienced for decades, with hate speech and “alternative facts” popping up unfiltered in our news feeds daily. The world seems to be falling apart as the opinions of a few quickly become the opinions of many, and those ‘many’ become anxious followers.

We scroll through feeds on X, Instagram, TikTok, or Telegram, and the deeper we go, the more we sink into a lake of darkness. A darkness that responds to us in a way that makes us feel sad, even when it’s not directly related to us.

This fear and deep lake of darkness start to blind and disillusion us - bringing every conflict, war, and crime closer to our own souls. Our souls, our psyche, begin to darken. Our once beautiful and positive minds shift their focus toward these distressing events, even though we wish to keep them at a distance.

Fear is the helper of evil

In his 2024 book 'The Anxious Generation', Jonathan Haidt discusses the rise of smartphones, social media, and overprotective parenting. He points out that this style of upbringing has led to an increase in mental health issues among children and adolescents, and he refers to studies highlighting the shift from play-based childhoods to phone-based ones - a shift that contributes dramatically to anxiety, depression, and other mental health challenges affecting the youngest generation.

His main claim is that parents are overprotecting their children in the real world but failing to protect them in the virtual world. In his more than 30 years of studying moral psychology, he has come to see this as one of humanity's greatest problems:

"We are too quick to anger and too slow to forgive."

What Jonathan Haidt analyzes in The Anxious Generation is not only true for children and their parents - it is also true for democratic societies that allow their people to become isolated as they immerse themselves in social media and content streams that make them feel lonelier and more depressed the deeper they dive into those dark lakes of information.

Fear will eat the soul

None of us experienced the early 20th century firsthand, and many who did are no longer with us. Entire generations fell into the trap of hate speech, filled with rage and anger before and during World War II, and we see the same patterns arising on the horizon again at dramatic speed. Back then, it was one authoritarian leader and his anti-democratic followers spreading - quite untargeted - information via the “Volksempfänger” and printing pamphlets with racist, rage-filled messages for the public. There is no need to spell out where that ended, but it is worth underlining that radio was then a fairly new way of delivering information (the first radio broadcasts around 1909, commercialization from 1919). I recommend reading Brecht's radio theory, particularly his essay "Der Rundfunk als Kommunikations-Apparat" ("The Radio as a Communications Apparatus"), written in 1932.

One quote from Brecht's essay, translated from German:

"Radio must be transformed from a distribution apparatus into a communications apparatus. The radio could be the finest possible communications apparatus in public life, a vast system of channels. That is, it could be so, if it understood how to receive as well as to transmit, how to let the listener speak as well as hear, how to bring him into a network instead of isolating him. […] Radio must make exchange possible."

At that time, it was clear that humans were communicating with humans, as the technology offered no other way. Still, radio was one of the first mass media formats; by the height of the Third Reich, almost everyone had positioned a receiver at the center of their living room, and many lives revolved around those wooden boxes receiving radio waves.

While radio was a unifying force in its time, bringing families together around a shared experience, it also demonstrated how a powerful medium could be wielded for manipulation and propaganda. The one-way flow of information from broadcaster to listener created a dynamic where narratives could be controlled and dissenting voices silenced. As technology evolved, so did the nature of media, transforming from centralized mass communication into a fragmented, personalized experience. This shift not only changed how we consume information but also how we engage with it, setting the stage for the modern media landscape where everyone is both a consumer and a creator of content.

From one voice to a million echoes

Steve Bannon, chief executive of Trump's 2016 presidential campaign, led the shift in the perception of mass media toward “mainstream media” and kicked off a debate about whether mass media is the place to find independent "truth" and "facts" - a debate that pushed social media even further toward the center of gathering "real" political information. Kellyanne Conway's framing of "alternative facts" in January 2017 flipped the narrative and further discredited mass media as a source of truth.

In today's media landscape, receivers have turned into contributors, and most of us have centered our lives around the smartphones in our pockets, just as people 80 years ago centered theirs around the radio. The difference here is that listening to the radio was often a family gathering or a social event, as people were rarely isolated with their radios at home.

With smartphones, we are now traceable and in continuous exchange of data with server farms from global players and governments. While being in control of your own information and maintaining awareness of the data you share is important, the speed at which the information age has evolved often leaves even the most informed among us struggling to keep up. The digital world is complex, and the paths our data take are not always clear - sometimes, they are deliberately obscured. This reality makes it crucial for institutions, governments, and technology companies to ensure transparency and accountability, while also providing individuals with the tools and knowledge they need to navigate safely.

What is certain is that social media is a mass phenomenon - with one key difference: the information is highly personalized for its recipients, narrowing down profiles that allow one out of a million users to be targeted and influenced based on collected data. We’ve become accustomed to being targeted with personalized newsletters, social media posts, or scrolling through our Instagram feed, falling in love with another cute dog video or a pair of sneakers that perfectly match our taste.

We probably won’t adopt the dog, but we linger on the video, watch the clip, and somewhere along the way, we end up buying that nice pair of sneakers - and the matching outfit - only to realize that we’ve ended up wearing the same white sneakers as everyone else.

Because if we can’t be a unicorn startup founder, at least we can dress like one.

The good, the bad, the ugly of personalization

Narrowing down the concept of personalization - let’s take sneakers as an example. When a startup founder, musician, or professional influencer wears a pair that we think is cool, it affects us. It makes us want them, and if we’re easily influenced, we might even buy them on impulse, stepping into the marketing trap.

This concept is loved in marketing, and yes, we as digital professionals use it for any product you can imagine. Is it bad? No. Buying a pair of sneakers that is ethically produced doesn’t harm anyone.

But is it a human deciding that you are the chosen one to see that sneaker ad?

Indirectly, yes. Someone configured an ad based on your personal preferences, and now it’s targeted at you. Voilà - that’s personalized marketing.

Does a human marketing manager manually decide that you specifically will see the ad at 4:47 PM on a Saturday, on your way to the gym?

No. That decision is made by predictive analytics and AI models that analyze user behavior patterns to predict the best time for engagement. These models match your profile, desires, and situation to the perfect product - and track whether you show interest, hoping you’ll become the next lucky owner of those white sneakers.
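The timing logic described above can be caricatured in a few lines. The following is a purely illustrative sketch, not any real ad platform's API: it assumes we already have a list of hours at which a (hypothetical) user engaged with past content, and simply predicts the best delivery hour as the most frequent one.

```python
from collections import Counter

def best_send_hour(interaction_hours):
    """Pick the hour of day (0-23) with the most past interactions.

    interaction_hours is hypothetical toy data: one integer per past
    engagement event. Real systems use far richer behavioral models.
    """
    if not interaction_hours:
        return None
    counts = Counter(interaction_hours)
    # Break ties deterministically by preferring the earlier hour.
    return min(counts, key=lambda h: (-counts[h], h))

# A user who mostly engages in the late afternoon:
history = [7, 16, 16, 17, 17, 17, 21]
print(best_send_hour(history))  # 17
```

Production systems replace this frequency count with learned models over many more signals, but the principle is the same: past behavior is turned into a prediction about when you are most likely to respond.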

But here’s the real point: if it’s possible to understand someone’s preference for sneakers based on the content they interact with, the same concept can easily be used to manipulate political beliefs.

The Facebook–Cambridge Analytica data scandal, which broke in 2018, is analyzed by Kate Crawford in "Atlas of AI" (2021, Yale University Press).

Here’s a quick refresher: The 2018 Facebook–Cambridge Analytica scandal exposed how personal data from up to 87 million Facebook users was harvested without consent using a personality quiz app. The data was used by Cambridge Analytica, a political consulting firm, to create psychographic profiles and target individuals with tailored political ads, most notably in the 2016 U.S. presidential election and the Brexit referendum (June 23, 2016). The controversy raised concerns about the role of targeted misinformation, data misuse, and voter manipulation in democratic processes.

The scandal led to significant consequences, including Facebook CEO Mark Zuckerberg testifying before Congress, a $5 billion fine imposed by the FTC (Federal Trade Commission), and the implementation of stricter data protection laws worldwide. Activists like Max Schrems further amplified these discussions through landmark legal battles, including the Schrems II ruling, which invalidated the EU-US Privacy Shield and reinforced the need for robust data privacy standards. 

Another notorious case of social media-driven influence - one that Yuval Noah Harari explores in his book "Nexus" (2024, published by Penguin) - is the Myanmar conflict. It remains one of the most thoroughly researched examples of how social media can directly fuel violence and ethnic tensions.

Facebook played a crucial role in the Myanmar crisis by serving as a platform for hate speech, disinformation, and incitement to violence against the Rohingya Muslim minority, contributing to genocide and mass displacement. A 2018 UN report found that Facebook amplified hate speech, largely unchecked, while Myanmar’s military and nationalist groups used it for propaganda. Facebook later admitted its failure and removed military-linked accounts, but the damage was already done.

As a civilization, we must recognize the danger: AI-driven algorithms amplify divisive content, making social media a powerful tool for manipulation and conflict on a global scale.

Turning people into weapons

Targeting people to buy sneakers is one thing; targeting people to commit acts of violence - acts they might never have considered until they were motivated by AI-driven algorithms - is something we now see happening across the globe.

In recent years, young people, primarily men, have turned themselves into weapons, driving trucks into crowds or attacking others with knives or firearms. Some examples include the Nice truck attack (France, 2016), Charlottesville attack (US, 2017), London Finsbury Park Mosque attack (UK, 2017), Waukesha Christmas Parade attack (US, 2021), Hanau attack (Germany, 2020), Magdeburg Christmas Market attack (Germany, 2024), Munich Union Rally attack (Germany, 2025) - and just days before I publish this article, an attack in Villach, Austria (2025).

The pattern across all these attacks is strikingly similar: the perpetrators were radicalized individuals (and frankly, their specific ideology - whether far-right, Islamist, far-left, Christian, or otherwise - doesn't matter, because the mechanism is the same).

More importantly, these men belonged to a similar age group, often lived in isolation, and were radicalized and inspired through social media, where content fed, stimulated, and rewarded their descent into extremism.

It is true that each of these men acted of his own free will - each had the choice either to carry out his violent plans or to step back and reflect on whether his actions aligned with the law and social norms.

But the question we, as a society and our political leaders, should be asking is: What turns people into weapons capable of killing dozens of innocents?

Is it ideology, nationality, or is it the deep isolation that comes from being trapped in an algorithm-driven echo chamber of rage and radicalization - a space where an AI system, programmed solely to maximize user engagement (since rage-inducing content works best), pushes them further toward extremism? Or are geopolitical actors intentionally exploiting these algorithms to destabilize democratic nations?
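The objective described above - ranking purely for engagement - can be made concrete with a toy sketch. This is an assumption-laden illustration, not any platform's actual code: each post carries a hypothetical `predicted_engagement` score, and the feed is ordered by that score alone.

```python
def rank_feed(posts):
    """Order posts purely by predicted engagement.

    Each post is a dict with 'text' and a precomputed
    'predicted_engagement' score (hypothetical model output).
    Note what is missing: there is no term for accuracy, civility,
    or user wellbeing - whatever provokes the strongest reaction wins.
    """
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

posts = [
    {"text": "Local bakery opens", "predicted_engagement": 0.2},
    {"text": "Outrageous claim about group X", "predicted_engagement": 0.9},
    {"text": "Cute dog video", "predicted_engagement": 0.6},
]
print([p["text"] for p in rank_feed(posts)])
```

The sketch makes the problem visible: if rage-inducing content reliably earns the highest predicted engagement, a single-objective ranker will surface it first, every time, without any human deciding to do so.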

These questions are not merely rhetorical; they demand answers and actions from policymakers, tech companies, and society as a whole. If we are to prevent further tragedies, we must confront the uncomfortable reality that technology, when left unchecked, can become a tool for radicalization and violence. It is our shared responsibility to ensure that digital spaces foster connection and understanding, rather than fear and division.


Book recommendations that inspired this article

Jonathan Haidt, "The Anxious Generation" (2024)
Kate Crawford, "Atlas of AI" (2021, Yale University Press)
Yuval Noah Harari, "Nexus" (2024, Penguin)

Recommended articles on the web to explore further

Bertolt Brecht and the radio
When radio in Germany celebrates its one hundredth anniversary, there is one guest who definitely deserves to be invited to the party: Bertolt Brecht. Andreas Ströhl has written about the role that Brecht played in shaping radio as we know it today.
Report: Facebook Algorithms Promoted Anti-Rohingya Violence
Amnesty International claims Meta ignored warnings of human rights risks in Myanmar, implementing inadequate safeguards
Facebook–Cambridge Analytica data scandal - Wikipedia
How Artificial Intelligence Influences Elections and What We Can Do About It
2024 will be the first election year to feature the widespread influence of AI. Policymakers at the state and federal level - as well as actors in the private sector - must step up to address the unique challenges AI creates for our democracy. Here’s how:
The Role of the Internet and Social Media on Radicalization: What Research Sponsored by the National Institute of Justice Tells Us | Office of Justice Programs
This report examines research funded by the National Institute of Justice, Domestic Radicalization to Terrorism program, focusing on the role of social media and the internet in influencing radicalization and violent extremist dynamics.

Article updated on February 21st, with typos corrected and sentences improved.