Encoding normative ethics: On algorithmic bias and disability
Computer-based algorithms have the potential to encode and exacerbate ableism and may contribute to disparate outcomes for disabled people. The threat of algorithmic bias to people with disabilities is inseparable from the longstanding role of technology as a normalizing agent, and from questions of how society defines shared values, quantifies ethics, conceptualizes and measures risk, and strives to allocate limited resources. This article situates algorithmic bias within the larger context of normalization, draws on social and critical theories that can be used to better understand both ableism and algorithmic bias as they operate in the United States, and proposes concrete steps to mitigate harm to the disability community as a result of algorithmic adoption. Examination of two cases — the allocation of lifesaving medical interventions during the COVID-19 pandemic and approaches to autism diagnosis and intervention — demonstrates instances of the mismatch between disabled people’s lived experiences and the goals and understandings advanced by nondisabled people. These examples highlight the ways particular ethical norms can become part of technological systems, and the harm that can ripple outward from misalignment of formal ethics and community values.
Versions: 2023-02-07 (2); 2023-01-16 (1)
Copyright (c) 2022 First Monday
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Authors retain copyright to their work published in First Monday. Please see the footer of each article for details.