Format:
Online-Ressource
ISSN:
2567-8833
Content:
Abstract: This article discusses fairness in artificial intelligence (AI) based policing procedures using facial recognition as an example. Algorithmic decisions based on discriminatory dynamics can (re)produce and automate injustice. AI fairness here concerns not only the creation and sharing of datasets or the training of models but also how systems are deployed in the real world. Quantifying fairness can distract from how discrimination and oppression translate concretely into social phenomena. Integrative approaches can help actively incorporate ethical, legal, social, and economic factors into technology development to more holistically assess the consequences of deployment through continuous interdisciplinary collaboration. https://www.tatup.de/index.php/tatup/article/view/7037
In:
volume:32
In:
number:1
In:
year:2023
In:
Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, München : oekom verlag GmbH, [2017]-, 32, issue 1 (2023), 2567-8833
Language:
German
DOI:
10.14512/tatup.32.1.24
URN:
urn:nbn:de:101:1-2023041118444591642317
URL:
https://doi.org/10.14512/tatup.32.1.24
URL:
https://nbn-resolving.org/urn:nbn:de:101:1-2023041118444591642317
URL:
https://d-nb.info/1285923669/34
URL:
https://www.tatup.de/index.php/tatup/article/view/7037/11797