Deciding About, By and Together with Algorithmic Decision Making Systems

Deciding about, by and together with algorithmic decision-making systems is also becoming increasingly important in the field of public communication. The Hans-Bredow-Institut therefore participates in an interdisciplinary project on this topic, which is funded by the Volkswagen Foundation for four years within the funding line "Artificial Intelligence and the Society of the Future".

For this, machine learning algorithms deduce decision rules from input data and store them in decision trees or neural networks (algorithmic decision making, "ADM"). Over time, the AI tool improves itself by learning from its past decisions, correct or incorrect.
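To make the idea concrete, the following is a minimal, purely illustrative sketch of such a system: it learns a single decision rule (a "decision stump", the simplest possible decision tree) from labelled input data. All function names, data, and the risk-score framing are hypothetical and serve only to illustrate the principle described above, not any system actually studied in the project.

```python
# Illustrative sketch of algorithmic decision making (ADM):
# a decision stump learns one threshold rule from labelled data.
# All data and names here are hypothetical.

def learn_stump(samples):
    """Find the threshold on a single feature that best separates
    the two outcome classes (0 = low risk, 1 = high risk)."""
    best_threshold, best_errors = None, len(samples) + 1
    for threshold, _ in samples:
        # Count misclassifications if we predict "high risk"
        # for every value at or above this threshold.
        errors = sum(
            1 for value, label in samples
            if (value >= threshold) != bool(label)
        )
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

def decide(threshold, value):
    """Apply the learned rule: classify as high risk at/above threshold."""
    return 1 if value >= threshold else 0

# Hypothetical training data: (risk score, observed outcome)
training = [(2, 0), (3, 0), (4, 0), (7, 1), (8, 1), (9, 1)]
rule = learn_stump(training)   # learns the threshold 7
```

Real ADM systems in criminal justice combine many such rules into deep trees or neural networks and are retrained as new outcome data arrives, but the basic mechanism is the same: decision rules are induced from past cases and then applied to new individuals.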

The overarching aim of this project is to examine whether there are limits to this kind of ADM. ADM systems are becoming increasingly popular, especially within notoriously cash-strapped criminal justice systems ("CJS"). Within Western CJS, especially those of the USA and the UK, these tools are used at various stages of the criminal justice process to assess the risk a particular individual poses to the public (e.g. the risk of reoffending). In the USA, major civil liberties organisations such as the ACLU have even advocated their use at all stages of the criminal process to avoid possible human biases.

Against this backdrop, the project examines the following questions:
  1. How do humans make decisions about other humans, compared with how ADM systems make the same decisions?
  2. How do humans take decisions about other humans in conjunction with ADM systems?
  3. Where are the limits beyond which machines should not make decisions about people?
  4. How can states decide whether ADM systems should be used within criminal justice systems at all?

Project Description

Details will follow soon.

Project Information

Overview

Duration: 2019-2022

Research programme:
RP2 - Regulatory Structures and the Emergence of Rules in Digital Communication

Third-Party Funding

VolkswagenStiftung, funding line "Künstliche Intelligenz – Ihre Auswirkungen auf die Gesellschaft von morgen" ("Artificial Intelligence – Its Impact on the Society of Tomorrow")

Cooperation Partner

Principal Investigators

Prof. Dr. Anja Achtziger, social and economic psychologist (Zeppelin University)
Prof. Dr. Wolfgang Schulz, legal and media scholar (Director of the Hans-Bredow-Institut and the Humboldt Institute for Internet Governance)
Prof. Dr. Georg Wenzelburger, political scientist (TU Kaiserslautern)
Prof. Dr. Karen Yeung, legal scholar and ethicist in the departments of law and computer science (University of Birmingham)
Prof. Dr. Katharina A. Zweig, biochemist and computer scientist (TU Kaiserslautern)

Contact person

Prof. Dr. Wolfgang Schulz
Director (Chairperson)

Leibniz-Institut für Medienforschung │ Hans-Bredow-Institut (HBI)
Rothenbaumchaussee 36
20148 Hamburg

Tel. +49 (0)40 45 02 17 0 (Sekretariat)

