New Code of Conduct for Human-Machine Decision-Making Processes in Content Moderation

Prof. Dr. Matthias C. Kettemann, head of the research program “Regulatory Structures and the Emergence of Rules in Online Spaces,” presented a code of conduct for human-machine decision-making processes in content moderation to representatives of European supervisory authorities and national Digital Services Coordinators (DSCs) on November 10.

The code, entitled “Strengthening Trust,” was developed at the Alexander von Humboldt Institute for Internet and Society as part of the “Human in the Loop?” project, with the participation of Matthias C. Kettemann and Katharina Mosene. It aims to supplement the Digital Services Act (DSA) with practical, industry-specific standards. To this end, it defines guidelines for fair, transparent, and comprehensible moderation processes on digital platforms and large online services.

The Code of Conduct is available in English.

Why Is a Code of Conduct Necessary?

“Effective content moderation ensures an online communication environment in which terrorism and hate speech have no place. People cannot achieve this alone, and neither can machines, i.e., automated filters. That is why cooperation between people and machines is necessary. Laws such as the Digital Services Act (DSA) are important, but standards also help improve platforms’ practices. Just as the EU’s Code of Practice on Disinformation has been successful for many years, our code of conduct aims to create real added value. It is based on empirical research and has been discussed with all stakeholders in an intensive process,” explains Matthias C. Kettemann, adding: “It is an extraordinary moment that key European regulatory authorities are now paying attention to this code of conduct at the highest level. Research-based, consensual standards are on their way to becoming the benchmark in platform regulation across Europe, protecting freedom of expression, diversity, and democratic participation.”

Platforms such as Meta and TikTok review posts daily to determine which ones remain visible, are removed, or are restricted. This content moderation aims to limit harmful content, such as hate speech, violence, and disinformation. At the same time, however, freedom of expression and diversity in the digital space must be protected. Due to the vast amount of content produced daily, these platforms rely on semi-automated decision-making processes for content moderation. AI systems detect and filter harmful and problematic content while human moderators make decisions in complex or culturally sensitive cases. Additionally, algorithmic recommendation mechanisms determine which content appears more prominently in users’ feeds and which content becomes barely visible. In practice, however, these processes have their limits, particularly with regard to fairness, traceability, control, and the protection of fundamental rights.
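To make the semi-automated workflow described above more concrete, the following is a minimal, purely illustrative Python sketch of how a platform might route a post between an automated classifier and human review. All names, thresholds, and categories here are assumptions for the example; they are not taken from the Code of Conduct or from any real platform’s systems.

# Illustrative sketch only: a simplified human-in-the-loop routing step for
# content moderation. Thresholds and categories are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    KEEP_VISIBLE = "keep_visible"          # content stays online
    REMOVE_AUTOMATICALLY = "remove"        # clear-cut violation, handled by the filter
    SEND_TO_HUMAN_REVIEW = "human_review"  # uncertain or context-sensitive case


@dataclass
class ModerationSignal:
    harm_score: float           # 0.0 (harmless) to 1.0 (clearly harmful), from an AI classifier
    culturally_sensitive: bool  # flag for cases that need human judgement (e.g. satire, reclaimed terms)


def route(signal: ModerationSignal,
          remove_threshold: float = 0.95,
          review_threshold: float = 0.60) -> Decision:
    """Route a post based on classifier confidence and context flags.

    Clear-cut cases are handled automatically; borderline or culturally
    sensitive cases are escalated to human moderators, mirroring the
    division of labour described in the text.
    """
    if signal.culturally_sensitive:
        return Decision.SEND_TO_HUMAN_REVIEW
    if signal.harm_score >= remove_threshold:
        return Decision.REMOVE_AUTOMATICALLY
    if signal.harm_score >= review_threshold:
        return Decision.SEND_TO_HUMAN_REVIEW
    return Decision.KEEP_VISIBLE


if __name__ == "__main__":
    examples = [
        ModerationSignal(harm_score=0.99, culturally_sensitive=False),
        ModerationSignal(harm_score=0.72, culturally_sensitive=False),
        ModerationSignal(harm_score=0.40, culturally_sensitive=True),
        ModerationSignal(harm_score=0.05, culturally_sensitive=False),
    ]
    for s in examples:
        print(s, "->", route(s).value)

In this sketch, only high-confidence classifier verdicts are acted on automatically, while anything uncertain or flagged as context-dependent goes to a human queue; the fairness, transparency, and oversight questions the Code of Conduct addresses concern exactly how such thresholds, flags, and review processes are designed and audited.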

What Does the Code of Conduct Regulate?

The Code of Conduct establishes ten clear guidelines for fair, transparent, and comprehensible moderation processes on digital platforms and major online services. The aim is to strengthen trust in human-machine decisions in content moderation. At the same time, the Code of Conduct addresses challenges related to protection against discrimination, data security, the mental health of human moderators, feedback processes, and independent monitoring. While the Digital Services Act (DSA) establishes minimum legal standards, it leaves many practical questions open; for these, the Code of Conduct provides concrete normative guidance.

The key guidelines include:

  • Systematic assessment of the risks and impacts of automated systems
  • Protective measures for marginalized groups against discrimination and digital violence
  • Compliance with high data protection standards
  • Consideration of the mental health of human moderators
  • Clear and accessible feedback and complaint structures for users
  • Responsible design and development of technology
  • Independent control and evaluation mechanisms

Each guideline contains verifiable implementation proposals for platforms.

Who Is the Code of Conduct Intended For?

The Code of Conduct is intended primarily for platform operators and international providers of digital services. It is also relevant for supervisory authorities, policymakers, and civil society organizations.

How Did the Code of Conduct Come into Being?

It came into being as part of the “Human in the Loop?” research project at the Alexander von Humboldt Institute for Internet and Society (HIIG), with which the HBI cooperates closely. The code draws on preliminary work by Matthias C. Kettemann at the HBI, scientific analyses, case studies, and input from external experts from civil society, platform companies, legal scholarship, EU authorities, and NGOs.

The project is funded by the Mercator Foundation.

Further information: https://graphite.page/coc-strengthening-trust/#background

Photo collage: HIIG

Last update: 13.11.2025
