Platform Badges for Civic Communication: Addressing Distorted Attention Distribution Logics on Digital Platforms
How can platforms address distortions in the digital attention economy without restricting free expression excessively? This blog post explores how new incentive structures can promote constructive communication on digital platforms and the potential of the Digital Services Act to facilitate such interventions.
Digital platforms have become essential infrastructures of contemporary public communication. Their algorithmic systems determine which actors and messages gain visibility and shape the conditions of democratic discourse.
However, due to incentives within the digital attention economy and the maximization of engagement, current platform architectures often favor content that is conflict-oriented, emotionally polarizing, or misleading. As a result, communication that aligns with democratic norms – such as accuracy, reciprocity, respect, and a problem-solving orientation – faces an uphill battle to achieve comparable levels of salience.
The Digital Services Act (DSA) introduces a risk-based governance framework that requires very large online platforms (VLOPs) to identify and mitigate systemic risks to civic discourse (Art. 34, 35 DSA). While discussions often focus on restrictive measures such as content removal or account sanctions, the DSA also explicitly allows platforms to adapt recommender systems and interface design as mitigation strategies.
A promising approach in this direction is the introduction of platform badges for civic communication. Badges represent a governance-by-design mechanism that creates positive incentives for communicators who commit to democratic communication norms. Rather than restricting content, platforms can support a more balanced distribution of attention by selectively amplifying norm-compliant communication.
The Basic Concept
Civic communication badges would be voluntary, user-facing, and visibility-relevant. Users opting in would commit to defined duties of care, for example:
- avoiding the intentional or negligent dissemination of disinformation and misinformation, and
- adopting deliberative communication standards such as rationality, reciprocity, respect, and constructiveness.
These commitments are modest but meaningful. They reflect established norms in existing media self-regulation regimes (e.g., the German Press Code) and are adaptable to different platform environments.
In exchange, the platform integrates the badge as a ranking signal: users with badges receive increased visibility in feeds, recommendations, or replies. Visibility advantages remain proportionate and must not override all other ranking criteria.
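To make the "proportionate, but not dominant" idea concrete, the badge's entry into a ranking function can be sketched as a bounded multiplier on an otherwise unchanged relevance score. This is an illustrative assumption, not a real platform API: all names, weights, and the cap value are hypothetical.

```python
# Hypothetical sketch: a badge as one capped signal in a ranking function.
# BADGE_BOOST and all field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    relevance: float        # base relevance/engagement score in [0, 1]
    author_has_badge: bool

BADGE_BOOST = 1.15  # small, fixed multiplier: proportionate, never dominant

def rank_score(post: Post) -> float:
    """The badge nudges visibility upward but cannot override the
    base relevance ranking, keeping the advantage proportionate."""
    boost = BADGE_BOOST if post.author_has_badge else 1.0
    return post.relevance * boost

# A badge narrows, but does not erase, a relevance gap:
feed = [Post(0.9, False), Post(0.82, True), Post(0.5, True)]
ranked = sorted(feed, key=rank_score, reverse=True)
```

Because the multiplier is capped, a badged post with very low base relevance still ranks below a highly relevant post without a badge; only posts of roughly comparable quality change order.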

Illustration: A governance-by-design mechanism to shape the attention distribution logic on digital media platforms and increase the salience of civic communication
Why Visibility Matters
Visibility is a central resource in digital communication. Current platform architectures often prioritize content that maximizes engagement regardless of communicative quality. This leads to:
- disproportionate reach for disinformation;
- high amplification of antagonistic and negatively emotion-charged content;
- a fundamental disadvantage for communicators who aim for accuracy or deliberation.
The badge mechanism aims to partially offset this structural distortion by establishing an alternative logic of attention grounded in civic norms. Rather than countering harmful communication solely through deletion or demotion, platforms create a parallel approach that actively supports constructive content.
This does not eliminate political conflict – a normal and necessary aspect of democratic life – but helps prevent its escalation into persistent antagonism and hyperpolarization. Constructive communicators gain greater access to attention, whereas polarization-dependent strategies lose some of their algorithmic advantage.
Acquisition, Compliance, and Enforcement
To obtain a badge, users complete a brief onboarding process that explains the underlying norms. This may include short explanations, examples, and consequences of non-compliance. Access criteria (e.g., minimum account age or verification procedures) can reduce abuse from spam or coordinated manipulation.
Compliance with the duties of care can be assessed by building on and extending existing moderation processes. Platforms already rely on combinations of automated detection, human review, and user reports; these systems can be expanded to evaluate badge compliance. Assessments should be based on a user’s overall communication rather than individual posts to minimize false positives and reduce overenforcement.
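The principle of judging a user's overall communication rather than single posts can be sketched as a rolling violation rate over a window of recent posts. The window size and threshold below are illustrative assumptions, not values from the underlying article.

```python
# Illustrative sketch: badge compliance assessed over a user's recent
# communication as a whole, not per post. Window and threshold are
# assumed values for illustration only.

from collections import deque

class ComplianceTracker:
    def __init__(self, window: int = 100, max_violation_rate: float = 0.05):
        # True = post flagged (by automated detection, review, or reports)
        self.recent: deque[bool] = deque(maxlen=window)
        self.max_violation_rate = max_violation_rate

    def record(self, post_flagged: bool) -> None:
        self.recent.append(post_flagged)

    def compliant(self) -> bool:
        """A single flagged post does not cost the badge; only a
        sustained pattern across the window does, which reduces the
        impact of false positives and overenforcement."""
        if not self.recent:
            return True
        rate = sum(self.recent) / len(self.recent)
        return rate <= self.max_violation_rate
```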
Violations would result in a tiered response: warnings, temporary suspension of badge status, or, in severe or repeated cases, loss of the badge. All outcomes would fall under the DSA’s complaint and review mechanisms, ensuring procedural safeguards.
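The tiered response described above amounts to a small escalation ladder. The sketch below models it with assumed states; the exact tiers and the handling of severe cases are design choices, not prescriptions from the DSA.

```python
# Sketch of a tiered enforcement ladder (warning -> suspension -> loss).
# States and transitions are illustrative assumptions.

from enum import Enum

class BadgeStatus(Enum):
    ACTIVE = "active"
    WARNED = "warned"
    SUSPENDED = "suspended"
    REVOKED = "revoked"

ESCALATION = {
    BadgeStatus.ACTIVE: BadgeStatus.WARNED,
    BadgeStatus.WARNED: BadgeStatus.SUSPENDED,
    BadgeStatus.SUSPENDED: BadgeStatus.REVOKED,
    BadgeStatus.REVOKED: BadgeStatus.REVOKED,
}

def escalate(status: BadgeStatus, severe: bool = False) -> BadgeStatus:
    """Each confirmed violation moves one step up the ladder;
    severe violations may lead directly to loss of the badge."""
    if severe:
        return BadgeStatus.REVOKED
    return ESCALATION[status]
```

In practice, every transition would also trigger the DSA's notice, complaint, and review mechanisms, so users can contest an escalation.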
A Non-Repressive Intervention for a Distorted Attention Landscape
The badge mechanism does not seek to eliminate problematic communication, nor does it impose censorship-like restrictions on individual users or content. Instead, it introduces a corrective to the structural imbalance that currently favors antagonistic, sensational, or misleading communication. It offers an opportunity space for communicators who align with civic norms and introduces incentives for others to adopt similar standards.
In a digital environment where platforms’ architectures deeply shape public discourse, inaction would implicitly preserve the current distortive dynamics. Civic communication badges offer a moderate, structurally oriented intervention that strengthens democratic resilience by rebalancing visibility – not by restricting speech, but by rewarding communicators who support a functional digital public sphere.
A Risk Mitigation Measure Consistent with Fundamental Rights
The badge system proposed here can be understood as a risk mitigation measure under Articles 34 and 35 DSA: it adapts algorithmic design and moderation practices in the ways those provisions envisage and counteracts the systemic risks they name – negative effects on civic discourse, elections, public security, and fundamental rights. As a risk mitigation measure, however, it must itself respect fundamental rights.
Unlike public authorities, platforms are not directly bound by fundamental rights. Nonetheless, Art. 35(1) DSA requires them to consider fundamental rights when implementing risk mitigation strategies. Therefore, the design must limit self-censorship or unfair effects while encouraging democratic communication. At the same time, platforms have their own rights (e.g., the freedom to operate a business), which generally give them more regulatory flexibility than public entities – supporting, from a legal perspective, the use of a badge as a way to handle “awful but lawful” content.
Four factors are especially important from a fundamental rights perspective:
- User autonomy: It is essential that users can make an informed choice, free from pressure, about whether they want the badge. The more de facto coercion – such as significant disadvantages or social pressure – the weaker the legitimacy of the user’s decision, because the opt-in is not truly voluntary.
- Neutrality toward opinions: Badges become especially problematic if they are tied to specific viewpoints. Clearly false statements of fact, by contrast, enjoy significantly weaker fundamental rights protection, especially where the rights of third parties are affected. In general, badge criteria should therefore target style and tone (e.g., aggressive, demeaning) rather than content or positions.
- Equality of communicative opportunities: A visibility boost creates inequality between users with and without a badge. However, the badge is, in principle, accessible to everyone. It can also be argued that the badge compensates for existing algorithmic disadvantages affecting calmer and more factual (i.e., supposedly more “boring”) content.
- Proportionality: The badge is designed as a soft intervention because users can choose. At the same time, there is a trade-off: more autonomy (easy switching, quick reinstatement) may be less effective; greater strictness may be more effective but interfere more strongly with freedom rights. These tensions are typical of fundamental rights balancing.
This blog post is based on the peer-reviewed journal article Rau, J., Harfst, J.-O., & Mast, T. (2025). Platform Badges for Civic Communication: An Interdisciplinary Discussion of a Risk Mitigation Measure Pursuant to Art. 35 DSA. Internet Policy Review, 14(4). https://doi.org/10.14763/2025.4.2054
It was partially funded by the German Federal Ministry of Education and Research (BMBF) under grant number 01UG2450IY (“RISC – Hamburg”) and by the Mercator Foundation (“DSA Research Network”).
Last update: 13.01.2026
Research programme:
RP 2 Regulatory Structures and the Emergence of Rules in Online Spaces