DigiMeet 2025: Platform Governance and Power

What are the current developments in global platform governance? The Digitalisation Research and Network Meeting (DigiMeet), organized by the Bavarian Research Institute for Digital Transformation (bidt), the Center for Advanced Internet Studies (CAIS), the Weizenbaum Institute (WI), and the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI), addressed this question on November 6, 2025. The thematic focus was on the underlying power relations, social impacts, and technological advances that shape our political discourse.

Following an opening presentation by Dr. Tobias Mast (HBI) on power relations, responsibilities, and risks in platform governance, 60 doctoral and postdoctoral researchers from the collaborating institutions, along with other young researchers from various disciplines and countries, discussed the topic of “Platform Governance & Power: Between Control, Ethics, and Societal Dynamics.” Four sessions with a total of 16 presentations took place.

Track 1a: Platform Regulation and Community Building – Regulatory Frameworks

This session discussed the challenges of regulating social media platforms.

In his presentation, “The Human Rights Responsibility of Social Media Platforms — Meta and the Oversight Board,” Tamás Pongó (Hungary) examined the Meta Oversight Board (MOB) and its compliance with the United Nations Guiding Principles on Business and Human Rights, emphasizing the importance of transparency, accessibility, and availability in grievance mechanisms. While Meta is the only platform to have established such an oversight board, his findings suggest that the MOB does not meet these criteria: it is not accessible, lacks transparency, and is not predictable. Most importantly, cases are selected strategically.

Sophia Salziger (Netherlands) discussed Article 82 GDPR and the right to compensation for non-material damage in her presentation, “Balancing Democratic Values and Accountability Within the CJEU’s Civil Liability Regime for Non-Material Damage Under Article 82 GDPR.” Although “non-material damage” is broadly defined to include “minor” and possible future damage, the compensation system lacks a punitive or preventive function, rendering Article 82 less effective.

In their presentation, “The Gatekeeper’s Gambit: How Platforms Strategically Adapt to the Digital Markets Act (DMA),” Vincent Heimburg et al. (Germany and Austria) identified three patterns in how platforms respond to the DMA: formal compliance; strategic containment; and protracted engagement, undermining, and resistance. Above all, platforms undermine the aims of the DMA by exploiting regulatory ambiguities (an “open legal situation”).

Sophia Graf (Germany) compared three adult platforms considered Very Large Online Platforms (VLOPs) under the Digital Services Act (DSA) in her presentation, “Governing Synthetic Non-Consensual Intimate Imagery on Major Adult Platforms in the EU.” Her findings show that these platforms vary greatly in how they define and prohibit non-consensual intimate imagery (NCII) and other harmful content on their websites.

Track 2a: Platforms as Shapers and Instruments of Governance – Opportunities and Challenges of AI

This session discussed the role of artificial intelligence in the use and impact of digital platforms.

Adrian Schadl (Germany) gave a presentation on “Generative AI as the New Complementor: Transforming Power Relations and Innovation Governance in Platforms,” in which he discussed how generative AI is used on platforms and its potential to alter the relationships between platform operators and complementors. One finding of his empirical study is that the use of generative AI contributes to power asymmetries in favor of platform operators.

In her presentation, “The AI Media Divide: How the Global North and South Frame Risk and Promise,” Barbara Maria Farias Mota (Germany) presented a comparative analysis of several thousand newspaper articles on AI from countries in the Global North and Global South.

In their presentation, “Explainable Decision-Making for Hate Speech Moderation via AI Multi-Agent Systems,” Nils Riekers et al. proposed an AI-based approach to recognizing and flagging hate speech, a task that still requires human intervention. Their proposed solution is a multi-agent system architecture with a human-in-the-loop rule set, resulting in an “explainable-by-design” interface.

Maurice Stenzel (Germany) introduced a new code of conduct for human-machine decision-making in content moderation in his presentation “Governing Moderation through Codes of Conduct: Co-Creating a Soft Law Framework for Platform Governance.” The code contains recommendations regarding various aspects of platform governance and moderation, including the degree of automation, human support interfaces, data protection, fairness and non-discrimination, and moderator training. The code of conduct is publicly available at Strengthening Trust.

Track 1b: Platform Regulation and Community Building – Networks and Discourses

The session highlighted the extent to which today’s platforms shape political communication, truth verification, activism, and the communication of violence.

In his presentation, “Vibes, TikTok Edits, and Audio Memes: Participatory Propaganda in the 2025 German Federal Election Campaign,” Marcus Bösch used the 2025 federal election as an example to show how TikTok has become a central venue for “participatory propaganda” through edits, memes, and audio snippets. In particular, the Alternative for Germany (AfD) party has benefited from fan edits and remix cultures that exploit the platform’s specific features and contribute to a form of “dark participation.”

Regina Cazzamatta (Germany) analyzed the consequences of Meta’s withdrawal from established fact-checking programs in her presentation, “From Moderation to Chaos: Meta’s Fact-Checking Fallout: A Win for Free Speech or a Loss for Truth?” Her findings show that uncertainties, power shifts, and risks arise depending on the country: in the EU due to stricter regulations, and in Latin America due to a lack of regulatory frameworks. At the same time, there is growing concern about the political instrumentalization of correction mechanisms, such as community notes, and about economic losses resulting from moderation decisions.

In her presentation, “Social Media Platforms: From Carriers of Activism to Documenting and Mediating Violence in Syria,” Lama Ranjous (Germany) discussed the role of social media in the Syrian conflict. She showed how these platforms first became venues for activism but then increasingly became infrastructure for documenting and mediating violence. Today, these platforms serve both educational and propaganda purposes, as well as psychological warfare and recruitment. This is particularly problematic for survivors, especially considering AI-assisted manipulation and deepfakes.

Philipp Riederle (UK) raised the question of digital interoperability as a response to platform power in his presentation, “Digital Platform Interoperability—Strengthening User Choice, Reducing Platform Power?” Using Mastodon as an example, he discussed how users can theoretically switch between instances for reasons such as community, governance preferences, or service quality. However, in practice, they often remain tied to specific platform spaces due to feelings of being overwhelmed, social ties, and unclear options.

Track 2b: Platforms as Shapers and Instruments of Governance – Platforms as Governance Facilitators

This session focused on the crucial role of platforms in shaping public discourse and government action. For example, platforms play a key role in crisis situations, facilitating communication, mobilization, and resource allocation.

Richard Uth et al. (Germany) presented their research on the implementation of artificial intelligence systems in healthcare and on designing trustworthy mHealth platforms in their presentation, “AI Decides on Our Health: How Interactivity, Autonomy, and Decision Focus Shape User Responses.” Their study of test patients’ interactions with AI-supported chatbots revealed that patients accept such support systems as long as the systems preserve the patients’ autonomy.

Frederike Eulitz et al. (Germany) presented their research findings on the design of working conditions for freelancers on the Upwork platform in their presentation, “How Online Labor Markets Shape Participation Through Governance: Evidence from Upwork.” Their study demonstrates that a change in the platform’s fee structure is not an effective way to increase interaction frequency between freelancers and certain clients.

Beltsazar Krisetya (UK) gave a presentation on “Supporting Procedures, Avoiding Politics: Platform (Dis)Engagement in Indonesia’s 2024 Election.” Using the 2024 Indonesian election as an example, she examined how content moderation practices change during political crises. Her data are intended to trace responsibilities, moderation levers, interactions between stakeholders, and accountable processes.

In his presentation, “Empowering Municipal Resilience: Digital Platforms for Citizen-Driven Innovation in Crisis Management,” Timon Sengewald (Germany) explored what a digital platform supporting citizen-driven innovation and strengthening the digital resilience of cities should look like. As part of his project, he is developing an AI-powered platform to connect municipal actors.

About DigiMeet

The Digitalisation Research and Network Meeting (DigiMeet) is a joint event organized by the Bavarian Research Institute for Digital Transformation (bidt), the Center for Advanced Internet Studies (CAIS), the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI), and the Weizenbaum Institute (WI). The event is designed for doctoral students and postdoctoral researchers conducting digitalization research, offering them the opportunity to present their work, network, and establish new collaborations.

Last update: 19.12.2025
