
When Scholars Sprint, Bad Algorithms Are on the Run

In the first research sprint of the project "Ethics of Digitalisation", funded by Stiftung Mercator, international researchers examined the use of AI in the moderation of online content. In this blog article, PD Dr. Matthias Kettemann and Alexander Pirang give an overview of the key findings.

Read the full article

Excerpt
In response to increasing public pressure to tackle hate speech and other challenging content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to be more effective and efficient in identifying potentially illegal or unwanted material. But algorithmic content moderation also raises many questions – none of which admit simple answers. Where is the line between hate speech and freedom of expression – and how can it be drawn automatically on a global scale? Should platforms deploy AI tools only against illegal online speech, such as the promotion of terrorism, or also for regular content governance? Are platforms’ algorithms over-enforcing against legitimate speech, or are they rather failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms’ algorithmic content moderation processes?


Kettemann, M. C.; Pirang, A. (2020): When Scholars Sprint, Bad Algorithms Are on the Run. In: HIIG Digital Society Blog, 3.12.2020, online: https://www.hiig.de/en/when-scholars-sprint-bad-algorithms-are-on-the-run

