Abstract
Several scholarly studies and journalistic investigations have found that automated decision-making in welfare systems burdens claimants by forecasting their behaviour, targeting them for sanctions and surveillance, and punishing them without revealing the underlying mechanisms driving such decisions. This article develops an analytical framework combining three areas of concern regarding automation: how it might introduce surveillance and social sorting, how it can entail the loss of human discretion, and how it requires new systems of governance and due process. This framework steers investigations into whether and how automated decision-making in welfare systems introduces new harms and burdens for claimants. A case study of automation processes in ALLEGRO, the IT system of Germany’s unemployment benefit service, applies this approach. It finds that the system allows for broad human discretion and avoids some forms of surveillance, such as risk assessments based on historical data, yet nevertheless increases surveillance of claimants by sharing their data with external agencies. The framework also suggests that concerns raised in one area – whether loss of human discretion, surveillance, or lack of due process – can be mitigated by attending to the other two, and it urges researchers and policy-makers to attend to the mitigating or reinforcing factors of each concern.
Keywords
- social sorting
- digital welfare
- due process
- data justice
- social security