
Encoding normative ethics: On algorithmic bias and disability


  • Ian Moura



Computer-based algorithms have the potential to encode and exacerbate ableism and may contribute to disparate outcomes for disabled people. The threat of algorithmic bias to people with disabilities is inseparable from the longstanding role of technology as a normalizing agent, and from questions of how society defines shared values, quantifies ethics, conceptualizes and measures risk, and strives to allocate limited resources. This article situates algorithmic bias within the larger context of normalization, draws on social and critical theories that can be used to better understand both ableism and algorithmic bias as they operate in the United States, and proposes concrete steps to mitigate harm to the disability community resulting from algorithmic adoption. Examination of two cases — the allocation of lifesaving medical interventions during the COVID-19 pandemic and approaches to autism diagnosis and intervention — demonstrates instances of the mismatch between disabled people’s lived experiences and the goals and understandings advanced by nondisabled people. These examples highlight the ways particular ethical norms can become part of technological systems, and the harm that can ripple outward from misalignment between formal ethics and community values.





How to Cite

Moura, I. (2023). Encoding normative ethics: On algorithmic bias and disability. First Monday, 28(1).



1. Through quantification, measurement, and categorization, information systems often intensify both the surveillance and erasure of disabled people