In:
Journal of Internet Technology, Angle Publishing Co., Ltd., Vol. 23, No. 5 (2022-09), pp. 1109-1116
Abstract:
Machine learning (ML) has been widely adopted in software applications across many domains. However, alongside this outstanding performance, the behavior of ML models, which are essentially a kind of black-box software, can be unfair and hard to understand in many cases. In a human-centered society, an unfair decision can harm human values and even cause severe social consequences, especially in decision-critical scenarios such as legal judgment. Although existing work has investigated ML models in terms of robustness, accuracy, security, privacy, and quality, the study of ML fairness is still at an early stage. In this paper, we first propose a set of fairness metrics for ML models from different perspectives. Based on these, we perform a comparative study of the fairness of widely used classic ML and deep learning models in the domain of real-world judicial judgments. The experimental results reveal that current state-of-the-art ML models can still raise concerns about unfair decision-making. ML models with both high accuracy and fairness are urgently needed.
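The record does not reproduce the paper's metric definitions. As a hedged illustration only, the sketch below implements demographic parity difference, a standard group-fairness measure that compares positive-prediction rates across demographic groups; it is not necessarily one of the metrics the paper proposes, and the function and variable names are hypothetical.

```python
def demographic_parity_diff(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : iterable of 0/1 model predictions (e.g. predicted guilty/not)
    groups : iterable of group labels for each prediction; this sketch
             assumes exactly two groups for simplicity.
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "binary-group sketch only"
    rates = []
    for g in labels:
        # Predictions belonging to group g
        preds_g = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates.append(sum(preds_g) / len(preds_g))
    return abs(rates[0] - rates[1])

# Toy example: the model predicts the positive class for 1/3 of group "a"
# but 2/3 of group "b", giving a parity gap of about 0.333.
preds = [1, 0, 0, 1, 1, 0]
grps = ["a", "a", "a", "b", "b", "b"]
print(round(demographic_parity_diff(preds, grps), 3))  # -> 0.333
```

A value of 0 indicates equal positive-prediction rates across the two groups; larger values indicate a stronger disparity.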
Type:
Online resource
ISSN:
1607-9264
Original title:
Fairness Measures of Machine Learning Models in Judicial Penalty Prediction
DOI:
10.53106/160792642022092305019
Language:
Unknown
Publisher:
Angle Publishing Co., Ltd.
Publication date:
2022