Algorithmic Amplification of Negative Discourse as a Systemic Risk

How does attention-economy-driven algorithmic amplification of conflict-driven and negative-emotional communication distort public discourse? And does this distortion constitute a systemic risk under the Digital Services Act (DSA)? This blog post builds on our previous post on Platform Badges for Civic Communication, explains why such interventions are needed, and outlines how they could address these systemic risks.

by Jan Rau

Digital platforms shape political communication through ranking, recommendation, and engagement-based optimization; these systems do not merely mirror public debate but actively determine which forms of political expression receive attention at scale. Within this environment, conflict-driven and negative-emotional communication tends to occupy a structurally advantaged position. In our recent paper Platform Badges for Civic Communication, we argue that, under the DSA, this dynamic gives rise to systemic risks to civic discourse as understood in Articles 34 and 35.

Systemic Risk under Articles 34 and 35 DSA

Articles 34 and 35 DSA require very large online platforms to identify, assess, and mitigate systemic risks arising from the functioning and design of their services. Particularly relevant is the systemic risk outlined in Art. 34(1)(c) DSA, which covers “any actual or foreseeable negative effects on civic discourse and electoral processes, and public security.” This is the provision we primarily refer to when outlining the harmful impact of the structural overrepresentation of conflict-driven, negative-emotional communication and its societal effects.

The Digital Attention Economy

Central to this argument is the intensification of the digitally mediated attention economy, in which communicators – both established and emerging – compete for limited human attention amid an ever-increasing volume of information. To maximize engagement, communicators and platforms employ digital tracking and analytics systems that fine-tune content to align with user behavior and algorithmic preferences.

Antagonism and Outrage as Structurally Privileged Forms of Communication

Empirical research shows that content emphasizing group identity, emotional conflict, and antagonism – “us versus them” framing rather than solution-oriented debate – achieves especially high visibility online. Rather than echo chambers alone, sustained and often unmoderated confrontation across political groups, supported by interconnected digital spaces, underpins contemporary polarization.

Because conflict-laden, negative-emotional communication is effective at capturing attention and mobilizing support, many actors deliberately amplify hostile, antagonistic discourse. For authoritarian actors in particular, such styles are central to communicative success. In this context, digital platforms are especially problematic: driven by attention-maximizing logic, ranking and recommendation systems frequently reinforce engagement-generating, conflict-oriented discourse. Over time, this produces a structural bias in favor of antagonism, escalation, moral polarization, and emotional outrage.

Societal Implications

Although the amplification of emotionally charged and conflict-driven content has received comparatively little attention in platform governance debates, its implications for democratic societies are substantial. A growing body of evidence links its prevalence to hyperpolarization, including the erosion of democratic norms and mutual toleration. Hyperpolarization, in turn, can foster political gridlock, weaken collective responses to societal challenges, create openings for authoritarian encroachment, and escalate into political violence. Given these effects and the success of certain actors in exploiting digitally amplified antagonistic discourse, the amplification of such content poses a systemic risk to civic discourse, electoral integrity, and public security, as articulated under Article 34 DSA.

The Importance of Political Conflict

It is important to underline that political conflict, emotional expression, and polarization are not inherently harmful but are central to pluralistic democracy, particularly for marginalized groups seeking visibility and collective identity. Attempts to suppress conflict through enforced consensus risk obscuring underlying injustices rather than addressing them, and efforts to curb polarization without engaging its roots may simply mute legitimate dissent. However, while political conflict in itself is not problematic (quite the opposite, it is necessary and intrinsic to politics), its constant digital overrepresentation has become harmful, fostering hyperpolarization and opening the door to authoritarian actors, as outlined above.

Risk Mitigation Through Platform Badges

Overall, the structural overrepresentation of conflict-driven, negative-emotional communication – and its documented societal effects – can indeed be understood as constituting a systemic risk to civic discourse, electoral processes, and public security, thereby requiring appropriate risk-mitigation measures.

Measures such as the proposed platform badges for civic communication can serve as effective risk-mitigation mechanisms. These badges operate as governance-by-design interventions: they create positive incentives for communicators who adhere to democratic communication norms and, in return, receive increased visibility on platforms. Rather than restricting content, badges aim to rebalance attention distribution by selectively amplifying norm-compliant communication. This shifts the regulatory focus from isolated content-moderation decisions to the structural communicative effects produced by platform architectures and design choices – effects that ultimately shape which forms of political expression dominate the digital public sphere.

Conclusion

Conflict and emotion are not incompatible with democratic communication. However, when platform systems consistently privilege conflict-driven negative emotionality, they introduce a structural bias that reshapes civic discourse at scale and to a problematic degree. Recognizing this dynamic as a systemic risk under Articles 34 and 35 DSA and advocating for appropriate risk mitigation measures – such as platform badges – is therefore essential.

“He Just Does Everything Right. He’s Simply Smart” – Young People’s Perspectives on AI

“AI and Me – In an Artificial Relationship” is the motto of Safer Internet Day 2026. AI applications are no longer used just for homework, but also as advisors and conversation partners. This blog post looks at young people’s experiences with AI and shows that there are often few opportunities at school or at home to discuss the role of AI in our daily lives. Safer Internet Day offers a good occasion to start this conversation.

by Claudia Lampert & Kira Thiel

The use of generative AI, particularly ChatGPT, has become common practice among young people. According to the most recent EU Kids Online survey, which covered 17 European countries, 72 percent of 13- to 17-year-olds had used generative AI within the past month (Staksrud et al., 2026). The JIM Study 2025 found that 91 percent of respondents use at least one AI application, up from 62 percent in 2024 (Feierabend et al., 2025). Given these figures, the question is no longer whether young people use AI, but how often, in what form, and for what purpose (Feierabend et al., 2025, p. 61). While generative AI was initially tried out playfully, it has since become an everyday tool for many, used in both school and leisure contexts (Feierabend et al., 2025).

As part of an international comparative study, the EU Kids Online research network took a closer look at the role generative AI plays in young people’s everyday lives, the purposes for which they use generative AI applications, their knowledge and skills, and the extent to which their use is accompanied, supported, or regulated (Staksrud et al., 2026). For this purpose, qualitative interviews were conducted in 15 countries. At the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI), these interviews were conducted between April and August 2025 with 15 young people aged 13 to 18 (Thiel et al., 2026).

AI Is an Integral Part of Young People’s Everyday Lives

For young people, generative AI is no longer an abstract topic of the future but part of everyday life. All of the young people surveyed had experience with AI-supported applications, whether at school or through friends. Their usage, however, is usually limited in scope: the focus lies primarily on ChatGPT, while other AI applications are used much less frequently. How often AI is used varies greatly and depends on the benefits young people see for themselves.

In a school setting, AI is primarily used as a research tool and to create or revise texts and presentations. But even outside of school, young people are increasingly turning to generative AI to solve everyday problems and answer personal questions. In these situations, AI acts as a communication partner that is always available to provide feedback, develop ideas, and assist with decision-making.

“Like Google, Only More Precise”

For young people, generative AI differs significantly from previous digital offerings. Its interactive and communicative properties open up new forms of human-machine interaction, shaping how young people perceive AI and the properties they attribute to it. Ideas about AI are clearly linked to specific applications and contexts of use. They often arise in comparison with familiar technologies and in contrast to humans.

Chatbots like ChatGPT are often compared to search engines like Google. A particularly appreciated feature is that the answers are precisely tailored to the query, and the information is clearly provided without lengthy searches. The ability to respond directly to answers, ask follow-up questions, and refine results step by step is considered a key advantage of generative AI over traditional search engines.

“It’s Like Asking Someone for Help or Asking Your Teacher”

However, young people draw clear boundaries when comparing AI to humans. Unlike humans, AI has no body, feelings, personality, or experiences of its own. They see AI more as a technical system that is available 24/7 and knows “an abnormal amount,” but is not really present. At the same time, some describe AI as “omniscient” and more intelligent than humans.

“No, it doesn’t have feelings. What do you want with it? Besides, it’s not — what do you call it? Tangible. It doesn’t even have a face. You talk to your cell phone.” (Roxy, age 13)

It is remarkable how keenly aware young people are of how closely AI mimics human communication. This mimicry evokes varied responses: some find it practical or helpful, while others find it irritating, alienating, or even slightly creepy. Regardless of how it is perceived, AI is treated as a communication partner that listens, responds, and assists with minor and major decisions.

“Artificial intelligence doesn’t judge you, so it will never do that. And […] I think it just understands you better. And besides, as I said, it will never judge you and is always so nice to you and stuff.” (Nayla, age 16)

In some cases, relationships resembling friendships are even described, especially where young people receive little attention in everyday life and feel understood by AI.

Little Consideration of Risks

The interviews show that generative AI is perceived as a hybrid offering whose significance and function vary depending on the context of use and individual needs. Some young people seem more capable than others of exploiting the potential of AI technologies. However, it remains unclear how they deal with potential risks such as disinformation, deepfakes, and factually incorrect output. Their reports suggest that they are exploring AI’s possibilities largely on their own and that neither home nor school provides a space to discuss the role of AI in everyday life. Safer Internet Day is a good opportunity to start this conversation together.

References

Feierabend, S., Rathgeb, T., Gerigk, Y., & Glöckler, S. (2025). JIM-Studie 2025: Jugend, Information, Medien: Basisuntersuchung zum Medienumgang 12- bis 19-Jähriger [JIM Study 2025: Youth, Information, Media: Basic Study on Media Use Among 12- to 19-Year-Olds]. https://mpfs.de/studie/jim-studie-2025/

Staksrud, E., Mascheroni, G., Milosevic, T., Ní Bhroin, N., Ólafsson, K., Şengül-İnal, G., & Stoilova, M. (2026). European Children’s Use and Understanding of Generative AI. EU Kids Online 2026. https://doi.org/10.21953/researchonline.lse.ac.uk.00137132

Thiel, K., Lampert, C., & Memis, E. (2026). Generative KI aus Sicht von Jugendlichen. Eine qualitative Studie im Rahmen des Projekts „EU Kids Online“  [Generative AI from the Perspective of Young People. A Qualitative Study as Part of the EU Kids Online Project]. Hamburg: Verlag Hans-Bredow-Institut, February 2026. https://doi.org/10.21241/ssoar.108066

Further Links

To the research network EU Kids Online

To the German website of the research network

To the project website at HBI

Image: AI-assisted, created by Kira Thiel

Platform Badges for Civic Communication. Addressing Distorted Attention Distribution Logics on Digital Platforms

How can platforms address distortions in the digital attention economy without restricting free expression excessively? This blog post explores how new incentive structures can promote constructive communication on digital platforms and the potential of the Digital Services Act to facilitate such interventions.

by Tobias Mast, Jan Rau and Jan-Ole Harfst

Digital platforms have become essential infrastructures of contemporary public communication. Their algorithmic systems determine which actors and messages gain visibility and shape the conditions of democratic discourse.

However, due to incentives within the digital attention economy and the maximization of engagement, current platform architectures often favor content that is conflict-oriented, emotionally polarizing, or misleading. As a result, communication that aligns with democratic norms – such as accuracy, reciprocity, respect, and a problem-solving orientation – faces an uphill battle to achieve comparable levels of salience.

The Digital Services Act (DSA) introduces a risk-based governance framework that requires very large online platforms (VLOPs) to identify and mitigate systemic risks to civic discourse (Art. 34, 35 DSA). While discussions often focus on restrictive measures such as content removal or account sanctions, the DSA also explicitly allows platforms to adapt recommender systems and interface design as mitigation strategies.

A promising approach in this direction is the introduction of platform badges for civic communication. Badges represent a governance-by-design mechanism that creates positive incentives for communicators who commit to democratic communication norms. Rather than restricting content, platforms can support a more balanced distribution of attention by selectively amplifying norm-compliant communication.

The Basic Concept

Civic communication badges would be voluntary, user-facing, and visibility-relevant. Users opting in would commit to defined duties of care, for example:

  • avoiding the intentional or negligent dissemination of disinformation and misinformation,
  • adopting deliberative communication standards such as rationality, reciprocity, respect, and constructiveness.

These commitments are modest but meaningful. They reflect established norms in existing media self-regulation regimes (e.g., the German Press Code) and are adaptable to different platform environments.

In exchange, the platform integrates the badge as a ranking signal: users with badges receive increased visibility in feeds, recommendations, or replies. Visibility advantages remain proportionate and must not override all other ranking criteria.
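The mechanics can be illustrated with a minimal sketch. This is our own illustration, not a specification from the paper: the badge enters the ranking function as one bounded signal among several, so that it can never override the other criteria on its own. All weights and names are hypothetical.

```python
# Illustrative sketch only: a badge as one bounded ranking signal among many.
# Weights, signal names, and the cap are invented for illustration.

def rank_score(engagement: float, relevance: float, has_badge: bool,
               badge_weight: float = 0.15) -> float:
    """Combine signals; the badge adds a proportionate, capped bonus."""
    base = 0.6 * engagement + 0.4 * relevance          # conventional signals
    bonus = badge_weight * base if has_badge else 0.0  # modest visibility boost
    return base + bonus

# A badged post gets a modest edge over an otherwise identical one ...
assert rank_score(0.5, 0.5, True) > rank_score(0.5, 0.5, False)
# ... but a badge alone cannot outrank clearly more engaging content.
assert rank_score(0.9, 0.9, False) > rank_score(0.5, 0.5, True)
```

The design choice here is that the boost is multiplicative on the base score and small, which keeps the advantage proportionate in the sense described above.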

Schematic representation of the quality badges for civic communication described in the text

Illustration: A governance-by-design mechanism to shape the attention distribution logic on digital media platforms and increase the salience of civic communication

Why Visibility Matters

Visibility is a central resource in digital communication. Current platform architectures often prioritize content that maximizes engagement regardless of communicative quality. This leads to:

  • disproportionate reach for disinformation;
  • high amplification of antagonistic and negatively emotion-charged content;
  • a fundamental disadvantage for communicators who aim for accuracy or deliberation.

The badge mechanism aims to partially offset this structural distortion by establishing an alternative logic of attention grounded in civic norms. Rather than countering harmful communication solely through deletion or demotion, platforms create a parallel approach that actively supports constructive content.

This does not eliminate political conflict – a normal and necessary aspect of democratic life – but helps prevent its escalation into persistent antagonism and hyperpolarization. Constructive communicators gain greater access to attention, whereas polarization-dependent strategies lose some of their algorithmic advantage.

Acquisition, Compliance, and Enforcement

To obtain a badge, users complete a brief onboarding process that explains the underlying norms. This may include short explanations, examples, and consequences of non-compliance. Access criteria (e.g., minimum account age or verification procedures) can reduce abuse from spam or coordinated manipulation.

Compliance with the duties of care can be assessed by building on and extending existing moderation processes. Platforms already rely on combinations of automated detection, human review, and user reports; these systems can be expanded to evaluate badge compliance. Assessments should be based on a user’s overall communication rather than individual posts to minimize false positives and reduce overenforcement.
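Assessing overall communication rather than single posts can be read as an averaging rule. The sketch below is our own illustration of that logic, with invented scores and threshold, not an implementation from the paper.

```python
# Illustrative sketch (invented numbers): badge compliance assessed on a
# user's overall communication, not on individual posts, which tolerates
# isolated borderline posts and so reduces false positives.

def complies(post_scores: list[float], threshold: float = 0.8) -> bool:
    """post_scores: per-post compliance ratings in [0, 1], e.g. from
    existing moderation pipelines. Compliant if the average is high enough."""
    if not post_scores:
        return True  # no assessed posts, nothing to sanction
    return sum(post_scores) / len(post_scores) >= threshold

# One flagged post among many compliant ones does not cost the badge ...
assert complies([1.0, 1.0, 1.0, 1.0, 0.2])
# ... but a pattern of violations does.
assert not complies([0.5, 0.4, 0.6, 0.3])
```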

Violations would result in a tiered response: warnings, temporary suspension of badge status, or, in severe or repeated cases, loss of the badge. All outcomes would fall under the DSA’s complaint and review mechanisms, ensuring procedural safeguards.
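The tiered response amounts to a simple escalation ladder. The following sketch is our own reading of that ladder; the state names and thresholds are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the tiered enforcement ladder described above:
# warning -> temporary suspension -> loss of the badge. Thresholds invented.

def badge_status(violations: int, severe: bool = False) -> str:
    """Map a user's violation history to a badge state."""
    if severe or violations >= 3:
        return "revoked"    # severe or repeated cases: badge lost
    if violations == 2:
        return "suspended"  # temporary suspension of badge status
    if violations == 1:
        return "warning"    # first violation: warning only
    return "active"

assert badge_status(0) == "active"
assert badge_status(1) == "warning"
assert badge_status(2) == "suspended"
assert badge_status(5) == "revoked"
assert badge_status(1, severe=True) == "revoked"
```

In practice each outcome would additionally pass through the DSA's complaint and review mechanisms, as noted above.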

A Non-Repressive Intervention for a Distorted Attention Landscape

The badge mechanism does not seek to eliminate problematic communication, nor does it impose censorship-like restrictions on individual users or content. Instead, it introduces a corrective to the structural imbalance that currently favors antagonistic, sensational, or misleading communication. It offers an opportunity space for communicators who align with civic norms and introduces incentives for others to adopt similar standards.

In a digital environment where platforms’ architectures deeply shape public discourse, inaction would implicitly preserve the current distortive dynamics. Civic communication badges offer a moderate, structurally oriented intervention that strengthens democratic resilience by rebalancing visibility – not by restricting speech, but by rewarding communicators who support a functional digital public sphere.

A Risk Mitigation Measure Consistent with Fundamental Rights

The badge system proposed here can be understood as a risk mitigation measure under Articles 34 and 35 DSA: it operates on the levers named there, namely algorithm design and moderation practices, and counteracts the systemic risks listed, that is, negative effects on civic discourse, elections, public security, and fundamental rights. However, as a risk mitigation measure, it must itself respect fundamental rights considerations.

Unlike public authorities, platforms are not directly bound by fundamental rights. Nonetheless, Art. 35(1) DSA requires them to consider fundamental rights when implementing risk mitigation strategies. Therefore, the design must limit self-censorship or unfair effects while encouraging democratic communication. At the same time, platforms have their own rights (e.g., the freedom to operate a business), which generally give them more regulatory flexibility than public entities – supporting, from a legal perspective, the use of a badge as a way to handle “awful but lawful” content.

Four factors are especially important from a fundamental rights perspective:

  1. User autonomy: It is essential that users can make an informed choice, free from pressure, about whether they want the badge. The more de facto coercion there is (for example, significant disadvantages or social pressure), the weaker the legitimacy of the user’s decision, because the opt-in is no longer truly voluntary.
  2. Neutrality toward opinions: Badges become especially problematic if they are linked to specific views. Conversely, fundamental rights protection is significantly weaker for demonstrably false statements of fact, particularly where third-party rights are affected. In general, it is better to focus on style and tone (e.g., aggressive, demeaning) rather than on content or positions.
  3. Equality of communicative opportunities: A visibility boost creates inequality between users with and without a badge. However, the badge is, in principle, accessible to everyone. It can also be argued that the badge compensates for existing algorithmic disadvantages affecting calmer and more factual (i.e., supposedly more “boring”) content.
  4. Proportionality: The badge is designed as a soft intervention because users can choose. At the same time, there is a trade-off: more autonomy (easy switching, quick reinstatement) may be less effective; greater strictness may be more effective but interfere more strongly with freedom rights. These tensions are typical of fundamental rights balancing.

This blog post is based on the peer-reviewed journal article: Rau, J., Harfst, J.-O., & Mast, T. (2025). Platform Badges for Civic Communication: An Interdisciplinary Discussion of a Risk Mitigation Measure Pursuant to Art. 35 DSA. Internet Policy Review, 14(4). https://doi.org/10.14763/2025.4.2054

It was partially funded by the German Federal Ministry of Education and Research (BMBF) under grant number 01UG2450IY (“RISC – Hamburg”) and by the Mercator Foundation (“DSA Research Network”).

Founding History: How the Institute Got Its Name

Seventy-five years ago, on May 30, 1950, the Hans-Bredow-Institut was founded by the former Northwest German Broadcasting Corporation (the predecessor of NDR and WDR) and Universität Hamburg. This blog post recounts the events surrounding the institute’s establishment and sheds light on the origin of its name.

by Hans-Ulrich Wagner

The minutes are short and concise. On August 13, 1948, Professor Emil Dovifat, doyen of newspaper research since the 1930s and pioneer of the discipline of journalism, suggested the name ‘Hans Bredow’ in his capacity as a specialist at the University of Berlin and as a member of the administrative board of the newly founded public Northwest German Broadcasting Corporation (NWDR).

The name for the planned “Institute for Broadcasting Research” in Hamburg was uncontroversial from the beginning. Hans Bredow (1879-1959), who had played a decisive role in shaping the organization of broadcasting until the Nazis came to power, was celebrated in the post-war years as the “father of broadcasting.” Radio broadcasting began in Germany in October 1923, and in 1948 the 25th anniversary was celebrated with great fanfare, with the former state secretary, who had been forced out of office by the Nazi regime, hailed as the “pacesetter of German broadcasting.”

However, virtually all other issues were contentious. Much remained to be clarified before Universität Hamburg and the NWDR could establish the Hans-Bredow-Institut as a foundation under public law in May 1950, because the two organizations initially had conflicting interests. Universität Hamburg had reopened in the winter semester of 1945/46; the continuity of the journalism institute from the Third Reich had been severed. Hans Wenke, an energetic professor of education appointed to the Faculty of Philosophy in spring 1947, set about establishing the field of media studies. He gave a lecture on “Young Academics and Academic Training for Broadcasting” to the cultural policy committee of the Zone Advisory Council, offering the services of the “Broadcasting Working Group at Universität Hamburg” he had already established.

Northwest German Broadcasting, the central broadcaster in the British occupation zone, which was taking on an ever-increasing range of programming tasks, initially had different interests. Following the closure of the ‘NWDR Broadcasting School’, the question of further academic training for young journalists arose. Furthermore, those responsible for programming wanted to build on the NWDR’s offerings based on research results. Until then, there had only been tentative attempts at systematic audience research, and questions about a possible television program were already pressing.

Negotiations on the financial resources, composition, and chairmanship of the board of trustees took many months. Although the NWDR provided the economic foundation, Universität Hamburg wanted to exert decisive influence over the “affiliated institute.” It was not until the 23rd meeting in February 1950 that an agreement was reached: “The Administrative Board approves the Board of Trustees of the Bredow Institute, which now consists of seven members: three NWDR representatives, three university representatives including the rector as chairman, and one higher education department representative from the school administration.” On May 30, 1950, the foundation with legal capacity was established with the “intention of promoting scientific research into the problems of radio and television.” Section 1, paragraph 1 of the statutes states: “In honor of the pioneer of German broadcasting, the foundation bears the name ‘Hans-Bredow-Institut for Radio and Television at Universität Hamburg’.”

Image: Former State Secretary Dr. h. c. Hans Bredow (center) with Dr. Werner Nestel, Technical Director of NWDR (left), and Professor Dr. Hans Wenke, Universität Hamburg (right), shortly before a lecture by Hans Bredow to the Working Group for Broadcasting Studies at Universität Hamburg in 1948. (Photo: DPD)

“Flood the Zone with Shit” – Elon Musk, the AfD and the Agenda-Setting of the Radical Right in the 2025 German Federal Election

In the current German parliamentary election campaign, the AfD and its top candidate Alice Weidel have repeatedly managed to generate a high level of media visibility. This is due in no small part to the prominent support of US billionaire Elon Musk. The following article explains how Musk’s communicative interventions increase the media presence of Weidel and the AfD, and how these dynamics are driven by the mechanisms of the digital attention economy.

The article was first published in German on the blog of the Research Institute Social Cohesion (RISC).

Musk Sets the Tone for Public AfD Coverage

“Only the AfD can save Germany,” the American billionaire and advisor to the new Trump administration, Elon Musk, wrote on his platform X in December 2024, in the middle of the German election campaign. Musk had recently attracted attention with radical conspiracy theories, disinformation, insults and, increasingly, communicative interventions in the domestic politics of European countries such as Great Britain and Germany. At the end of December, Musk published a guest article in the German newspaper WELT, followed in January by a live conversation with AfD leader Alice Weidel, which has since been watched by millions of users. The conversation between Weidel and Musk included not only AfD campaign messaging but also numerous half-truths, pieces of misinformation, and historically revisionist statements (Weidel: “Hitler was a communist”). The celebrity support paid off for the frontrunner, for example on X, where Weidel’s prominence skyrocketed in step with Musk’s digital interventions (Nenno & Lorenz-Spreen, 2025).

Weidel, Musk and the AfD: X as a Springboard into the Traditional Media Landscape

Similar effects can be observed in the broader German media landscape. The mere announcement of the conversation between Musk and Weidel on X dominated the headlines of major German media outlets:

Online coverage in traditional media: front pages of BILD, SPIEGEL, taz and DIE ZEIT

A brief quantitative analysis of German-language online media articles using the Media Cloud analysis tool (see methodological notes at the end) confirms that Musk’s interventions contributed to a significant boost in visibility for AfD top candidate Weidel. The share of media articles containing both the terms “Weidel” and “Musk” (dark blue) relative to articles mentioning only “Weidel” (light blue) increased significantly around Musk’s call to vote for the AfD (December 20: 37%), the WELT article (December 28: 55%) and the X talk (January 9: 91%).

Statistics: Media Coverage of Alice Weidel

The connection became most obvious in the week around the X talk between Weidel and Musk. On January 9, Media Cloud recorded the highest daily number of articles containing the term Weidel to date (205 articles), of which 186 also mentioned Musk. Over the week from January 6 to January 12, the number of articles mentioning Weidel rose to a record 1,243.
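The share figures reported above follow from a simple ratio of article counts. As a sketch, using the January 9 numbers quoted in the text (205 "Weidel" articles, 186 of which also mention "Musk"):

```python
# Sketch of the share computation behind the figures above, using the
# January 9 counts quoted in the text.

def co_mention_share(both: int, total: int) -> int:
    """Percentage of articles mentioning one term that also mention the other."""
    return round(100 * both / total)

assert co_mention_share(186, 205) == 91  # matches the ~91% around the X talk
```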

Statistics: Media Coverage of the AfD

If we use the search term “AfD” instead of “Weidel”, it becomes clear that the knock-on effects have so far been stronger for the candidate personally than for the party as a whole, although more moderate effects around the key events are also visible there.

The Digital Attention Economy as the Basis for the Hyper-Salient Communication of the Far Right

The rise of the radical right’s media visibility rests on dynamics that exemplify the transformation of discursive power structures in the digital public sphere. Digital media open up a communicative opportunity space for actors beyond the traditional political establishment (such as Musk, the AfD, and Weidel), in which they can both communicate directly with their supporters without going through the media (inward-looking communication) and substantially influence public discourse (outward-looking communication). The latter is an expression of the current hybrid media system, in which digital and traditional media combine and influence each other. The digital attention economy is fundamental to these mechanisms: political and media actors as well as digital platforms compete for the attention of media users in order to spread their messages and generate advertising revenue. Since the amount of available information has increased dramatically while available human attention has remained constant, competition in the digital attention economy has intensified significantly, and communicative strategies that allow actors to assert themselves in this environment have become all the more important.

Provocation as a Strategic Tool for Agenda Setting and Hacking

The radical right seems more aware of the logic of the digital attention economy than other actors and uses it strategically. Elon Musk’s prominent support of the AfD in the German federal election campaign, for example, was a particularly successful form of agenda-setting that significantly increased the media visibility of Weidel and the AfD. Weidel’s historically revisionist statement about Hitler and National Socialism was probably motivated, among other things, by a strategic interest in further shifting the “boundaries of what can be said” and normalizing extreme right-wing narratives. But the statement also seems to have been a deliberate provocation, one Weidel could be sure would attract media attention and increase her own visibility in the public arena’s limited attention space.

US President Donald Trump has used the principle of attracting attention through targeted, strategically placed provocations with great success for years. Already during the Republican primaries in the run-up to the 2016 presidential election, Trump attracted a disproportionate amount of media attention by breaking norms and taboos, which helped him win both the primaries and the presidency (Schroeder 2018; Wells et al. 2016). Steve Bannon, chief strategist of the first Trump administration and co-founder of the right-wing extremist portal Breitbart, summed up its communicative strategy as follows: “Flood the zone with shit.” In this reading, media visibility is the only decisive measure of success. Neither the truthfulness of one’s statements nor whether public coverage of them is positive or negative plays a significant role, because positive reception is ultimately ensured by the (digital) communication channels of the supporting political camp, whose reach the strategy itself constantly expands.

Radical Right-Wing (Digital) Noise Pollution: DDoS Attacks on Our Minds

The key to this strategy is to deny the political opposition the opportunity to set its own agenda. Instead, the opposition, like the media and the press, ends up working on the issues and narratives set by the radical right and rarely sets issues of its own. US journalist Julia Angwin describes this strategy of agenda-setting through constant taboo-breaking and provocation as a “DDoS attack on our minds”. A DDoS attack (distributed denial-of-service attack) is a cybersecurity term for overwhelming a web service (e.g. a website) with a flood of bogus requests. Applied to the AfD’s strategy and the powerful support of Elon Musk, this form of agenda-setting can be described as a systematic overloading of the process of political will formation. The discourse and information practices on which that process rests are undermined by constant, often substantively empty provocations. In the long term, the functioning of the political decision-making process itself appears fundamentally endangered.
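Angwin’s analogy can be made concrete with a toy simulation (my own illustrative sketch, not part of the cited analysis): attention is modeled as a fixed-capacity channel, and flooding it with noise crowds out legitimate items, just as a DDoS flood crowds out legitimate requests.

```python
import random

def served_fraction(capacity, legit, flood, seed=0):
    """Toy model: a fixed-capacity channel (attention) receives a mix of
    legitimate items and flood items; everything beyond capacity is dropped.
    Returns the fraction of legitimate items that still get through."""
    rng = random.Random(seed)
    items = ["legit"] * legit + ["flood"] * flood
    rng.shuffle(items)
    processed = items[:capacity]  # only `capacity` items fit per cycle
    return processed.count("legit") / legit

# Without flooding, all 50 legitimate items fit into a capacity of 100:
print(served_fraction(capacity=100, legit=50, flood=0))   # 1.0
# With heavy flooding, most capacity is consumed by noise:
print(served_fraction(capacity=100, legit=50, flood=950))
```

The point of the sketch is that the flood does not need to persuade anyone: merely occupying capacity is enough to displace other content.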

Debate on How to Deal with the Discourse Strategies of the Radical Right

At this point, it is difficult to say what specific impact the described gains in public visibility will have on the outcome of the 2025 federal elections. The findings of this brief analysis illustrate how quickly public discourse can be hijacked via social media in the slipstream of global political events, and how fluid the boundaries between radical digital fringe communities and traditional public discourse spaces can be.

There is certainly room for political and media action to address these developments. The European Digital Services Act (DSA), like the AI Act that entered into force in the EU in August 2024, offers broad scope for intervention. In particular, the broad concept of systemic risks to civic discourse, electoral processes and public security (Art. 34, 35 DSA) offers leverage points for action against the described dynamics of the digital attention economy. The German Bundestag administration and the EU have announced reviews prompted by the live talk between Musk and Weidel, and DSA proceedings have been underway for some time against several major platform operators.

With regard to public debate in the election campaign, a significant part of the decision-making authority over the nature of coverage lies with private media houses and public broadcasters, even though the hybrid media system and the digital attention economy exert considerable centrifugal forces here. Diverse, comprehensive and informed coverage of the AfD seems necessary, especially in a polarized election campaign. However, a permanent media presence can also have undesirable side effects that benefit the discourse strategies of the far right more than they harm them. In view of the political and media gains of the globalized radical right, society as a whole needs an ongoing debate about the occasions, extent and form in which the digital barrage of the radical right should be given space.

Methodological Notes

MediaCloud is an open-source analytics platform for online media coverage. It collects articles provided by the RSS feeds of media websites. Our analysis is based on a collection of 257 media websites from Germany, whose coverage in the period from November 1, 2024 to January 28, 2025 was filtered using the keywords listed above. Not all articles indexed by these keywords and included in the figures above have Musk, Weidel or the AfD as their main topic. However, a random manual validation confirmed that the articles essentially concern the coverage of these actors.
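The filtering step described above can be sketched as follows. This is a minimal illustration only: the record fields, sample articles and keyword list are assumptions for demonstration, not MediaCloud’s actual schema or our full query.

```python
from datetime import date

# Hypothetical records standing in for indexed articles; field names
# are illustrative, not MediaCloud's actual schema.
articles = [
    {"title": "Musk lobt die AfD", "date": date(2025, 1, 10)},
    {"title": "Koalitionsgespräche gehen weiter", "date": date(2025, 1, 11)},
    {"title": "Weidel im Livetalk", "date": date(2024, 12, 20)},
    {"title": "Rückblick auf 2023", "date": date(2024, 10, 5)},
]

KEYWORDS = ("musk", "weidel", "afd")          # assumed query terms
START, END = date(2024, 11, 1), date(2025, 1, 28)  # study window

def matches(article):
    """Keep articles inside the study window whose title contains a keyword."""
    in_window = START <= article["date"] <= END
    has_keyword = any(k in article["title"].lower() for k in KEYWORDS)
    return in_window and has_keyword

hits = [a["title"] for a in articles if matches(a)]
print(hits)  # ['Musk lobt die AfD', 'Weidel im Livetalk']
```

As the note above points out, keyword indexing of this kind is deliberately permissive, which is why a manual validation step on a random sample is needed.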

Literature

Franck, G. (1998). Ökonomie der Aufmerksamkeit: Ein Entwurf [Economy of Attention: A Draft] (12th edition). Edition Akzente. Carl Hanser Verlag.

Nenno, S. & Lorenz-Spreen, P. (2025). Do Alice Weidel and the AfD benefit from Musk’s attention on X? Alexander von Humboldt Institute for Internet and Society. https://www.hiig.de/en/musk-x-and-the-afd/ https://doi.org/10.5281/ZENODO.14749544

Schroeder, R. (2018). Social Theory after the Internet: Media, Technology, and Globalization. UCL Press. https://doi.org/10.2307/j.ctt20krxdr

Wells, C., Shah, D. V., Pevehouse, J. C., Yang, J., Pelled, A., Boehm, F., Lukito, J., Ghosh, S., & Schmidt, J. L. (2016). How Trump Drove Coverage to the Nomination: Hybrid Media Campaigning. Political Communication, 33(4), 669–676. https://doi.org/10.1080/10584609.2016.1224416

Cover image: iStock, Credit: da-kuk

