This article conceptualises and provides an initial roadmap for operationalising a feminist data ethics of care framework for the subfield of artificial intelligence (‘AI’) known as ‘machine learning’. After outlining the principles and praxis that comprise our framework, and then using it to evaluate the current state of mainstream AI ethics content, we argue that this literature tends to be overly abstract and founded on a heteropatriarchal world view. We contend that because most AI ethics content fails to equitably and explicitly assign responsibility to actors in the machine learning economy, there is a risk of implicitly reinforcing the status quo of gender power relations and other substantive inequalities, which contribute to the significant gap between AI ethics principles and applied AI ethics more broadly. We argue that our feminist data ethics of care framework can help to fill this gap by paying particular attention to both the ‘who’ and the ‘how’, as well as by outlining a range of methods, approaches, and best practices that societal actors can use now to make interventions into the machine learning economy. Critically, feminist data ethics of care is unlikely to be achieved in this context unless all stakeholders, including women, men, and non-binary and transgender people, take responsibility for this much needed work.
2. What and why: A feminist data ethics of care framework for machine learning
3. Who: Humans in the machine learning economy
4. How: Operationalising feminist ethics of care for machine learning
5. Conclusion and future research
Calls for feminist interventions into artificial intelligence (‘AI’) have grown louder and more numerous in recent years (Bassett, et al., 2020; D’Ignazio and Klein, 2020; Neff, 2020). Many of these calls seek to move beyond the AI ethics conversation (Crawford, et al., 2019; Metcalf, et al., 2019), by which we mean discussions about the ethical implications of “systems that display intelligent behaviour by analysing their environment and taking actions with some degree of autonomy to achieve specific goals” (European Commission, 2021). These interventions are much needed given the arguably limited impact of AI ethics content in practice (Crawford, et al., 2019) and the relative dearth of practical guidance around applied AI ethics (Morley, et al., 2020), especially for those seeking to investigate and address specific problems like racism, sexism, or classism.
Feminist scholars have made significant inroads in developing diverse theories, methods and best practices for data studies; however, their focus to date has largely been on social media, as opposed to AI. In this article we seek to extend this work, drawing particularly on the ground-breaking efforts of Luka and Leurs (2020), who in turn draw on the work of their colleagues (see e.g., Bivens, 2015; Luka and Millette, 2018), to explore what feminist data ethics for AI could look like in practice. Specifically, we aim to develop a feminist data ethics of care approach for the subfield of AI known as ‘machine learning’ (‘ML’). We define machine learning as algorithms, or models, that analyse and draw inferences from patterns in large datasets for human and/or automated decision-making, involving varying degrees of human instruction or supervision. The limited scholarship on feminist approaches to machine learning is significant given the “explosion” of this technology in everyday life, from recommender systems (e.g., videos on YouTube and Netflix), to personalised products (e.g., Apple smartphones and watches), targeted advertising (e.g., when scrolling through content on Instagram) and law enforcement (e.g., police use of facial recognition software).
While this technology can produce a range of benefits, it can also cause serious harm when it is used in a way that, for example, unfairly discriminates against people based on gender, class or race. For instance, recidivism risk software has suggested that people of colour are highly likely to commit future crimes, regardless of their individual criminal records (Angwin, et al., 2016). Harms can arise from issues at any stage in the machine learning pipeline — the often step-by-step processes of data collection and preparation, system development, and deployment of machine learning enabled technologies (Suresh and Guttag, 2021). Common issues include biases embedded during data collection and classification (Alvi, et al., 2018), but they can also occur due to choices made during the design, development and deployment of machine learning infrastructure and technologies (Hasselbalch, 2021). The machine learning economy — the infrastructure, processes, organisations and individuals involved in the development and deployment of machine learning technologies — is therefore an important site of study.
The aim of this article is to conceptualise and provide an initial roadmap for operationalising a feminist data ethics of care framework for the machine learning economy. In Section 2, we outline our feminist data ethics of care framework that is underpinned by two main principles: machine learning, like other types of AI, is “partial, situated and contextual” (Haraway, 1988; Leurs, 2017); and a range of representational and allocative harms can arise in the machine learning pipeline (Barocas and Selbst, 2016). These principles are the foundation of our feminist data ethics of care praxis for preventing harm that focuses on: (i) ensuring diverse representation and participation in the machine learning economy; (ii) critically evaluating positionality; (iii) centring human subjects at every stage of the machine learning pipeline; (iv) implementing transparency and accountability measures; and (v) equitably distributing the responsibility for operationalising a feminist data ethics of care praxis. The latter is particularly important given that limited attention has been paid in AI ethics scholarship to the significance of gender power relations and the often-gendered nature of ‘care work’ (Organisation for Economic Co-operation and Development [OECD], 2018).
In Section 3, we evaluate the state of AI ethics content against our feminist data ethics of care framework, arguing that it tends to be overly abstract and is often founded on a heteropatriarchal world view. We contend that because most of these initiatives fail to explicitly assign responsibility for operationalising AI ethics, they risk implicitly reinforcing the status quo of gender power relations and other substantive inequalities, which contribute to the significant gap between AI ethics principles and applied AI ethics more broadly. In Section 4, we argue that our feminist data ethics of care framework can help to fill this gap by paying particular attention to both the ‘who’ and the ‘how’, with a view to achieving the equitable distribution of responsibility for operationalising feminist data ethics of care in the machine learning economy. Our framework also outlines key methods, approaches, and best practices that can be used now to make feminist data ethics of care interventions into the machine learning economy.
Overall, we argue that feminist data ethics of care is unlikely to be achieved in practice unless all stakeholders, including women, men, and non-binary and transgender people, take responsibility for this work. While our framework focuses on machine learning, we contend that it has purchase across other subfields of AI, including neural networks, robotics and expert systems (Bullinaria, 2005). We conclude, in Section 5, by setting an ongoing research agenda for feminist data ethics of care in the machine learning economy and beyond.
2. What and why: A feminist data ethics of care framework for machine learning
There is no universal feminist theory or practice. As Cifor, et al. (2019) explain, “feminism is plural; there are many feminisms and they may differ in their positive visions, methodologies, collective ends, and situated concerns”. Feminists are thus “an enormously diverse group of people with varying opinions” shaped by Black, queer, trans, Latinx, Indigenous and other lived experiences. While feminist discourses are contested, many are united in action against gender subordination (Henry, et al., 2021) and in “a refusal of inheritance” of the heteropatriarchal status quo across contexts and social domains. In line with these ideals, we conceptualise feminism in terms of achieving the substantive equality of rights, opportunities, responsibilities and outcomes for women, men, non-binary, and transgender people. This is a pressing policy area given that “gender inequality persists everywhere and stagnates social progress” (United Nations, n.d.), to the detriment of women, girls and non-normative identities especially (see, e.g., United Nations, 2019).
More specifically, our feminist perspective is founded on ethics of care, which many scholars recognise as an inherently feminist approach to ethics (D’Olimpio, 2019; Luka and Millette, 2018). In contrast to dominant moral theories that emphasise “rationality”, “logic” and dispassionate detachment, often in line with purportedly objective rules (see e.g., Bentham, 1996; Kant, 1993; Mill, 1859), ethics of care theorists argue for a contextualised and relational approach (Raghuram, 2019). Rejecting the dichotomy between reason and emotion, such theorists often apply ethical rules and undertake the work of caring for others in a manner involving emotions, including compassion and empathy. Within an ethics of care framework, “caring” is “a species of activity that includes everything we do to maintain, contain and repair our ‘world’ so that we can live in it as well as possible”. This includes humans managing multilayered and intersecting relationships with themselves, others, states, institutions, the physical world and technologies.
According to Luka and Millette (2018), contemporary data ethics of care emerged in the 1980s from “second- and third wave feminist psychology, sociology, and cultural studies”, emphasising the importance of integrating “feminist and intersectional values into considerations of data analyses”. Intersectionality is a “heuristic term” that focuses attention on social, identity and ideological forces that affect and legitimize power (Crenshaw, 2017). An intersectional approach challenges single-axis thinking; for example, one that solely focuses on gender without considering how multiple structural forces such as race, gender and class can intersect for differently situated individuals (Crenshaw, 1990). This is important because social categories are “always permeated by other categories, fluid and changing, always in the process of creating and being created by dynamics of power”. In an age of ever-increasing datafication, data is an important source of and vehicle for the exercise of power (Hasselbalch, 2021), which leads us to propose a data ethics of care approach for machine learning that integrates feminist and intersectional values. As illustrated in Figure 1, our framework comprises two overarching principles and a five-pronged praxis that we argue should guide attempts to develop, maintain and repair (Elish and Watkins, 2020) machine learning technology.
Figure 1: A feminist data ethics of care praxis for machine learning.
Our first principle is that machine learning, like other types of AI, is “partial, situated and contextual”. It is now widely recognised that the data from which machines learn and produce information are not neutral (O’Neil, 2016). While machine learning technology is meant to think like “us”, it is in fact predicated on the world views, experiences and assumptions of its creators who are situated, subjective and often discretionary actors (Broad, 2018). Within the machine learning pipeline, “human expertise intervenes between raw data and the analysis, crucially shaping the data, the choice of analysis, and in some cases the truth claims associated with the analysis” (Muller, et al., 2019). Critically, practitioners involved in developing machine learning technologies make “data decisions” that can have negative downstream effects in high stakes domains, often with “outsized effects on vulnerable communities and contexts”. Take, for example, skin cancer detection models that work less effectively for people with darker skin tones (Kinyanjui, et al., 2020; see generally Buolamwini, 2017).
When analysing machine learning systems, we advance Haraway’s (1988) “view from a body, always a complex, contradictory, structuring, and structured body, versus the view from above, from nowhere, from simplicity”. This view is particularly important in the context of AI because it challenges the myth of disembodied objectivity, something the technology industry has long promoted in conjunction with notions of detached, impartial, mechanical and smart automated systems (Broad, 2018). This is not to reinforce an objective-subjective binary but rather to underline the fact that knowledge produced by machine learning systems is positional — derived from real and often limited human experiences — and subject to discretionary decision-making. The machine learning pipeline relies upon inputs of data created by humans — increasingly involving labour from low paid workforces in developing nations — and, to varying degrees, this data can embody the views, biases and ideologies of its creators (Royer, 2021). An example is when individuals or teams assign binary gender labels to people who may not identify as such (Access Now, 2021). Humans also exercise agency when designing and deploying machine learning enabled systems. For these reasons, machine learning technology is always situated and contextual, rather than objective and impartial.
This leads to our second feminist data ethics of care principle: machine learning can produce a range of representational and allocative harms at all stages of the pipeline. As Barocas and Selbst (2016) explain, representational harm refers to choices that consciously or unconsciously diminish particular identities through biased representations in data. Historical bias occurs when data reflects and fails to correct for structural inequities, such as when a dataset that captures low female representation in leadership is used to reinforce an unfounded heteropatriarchal belief that men are better leaders than women (Roselli, et al., 2019). Representation bias can occur when training datasets under- or over-represent certain groups of individuals: for example, racist and sexist representations of black women and girls in search engines (Noble, 2018). Measurement bias can occur when machine learning models select or measure features in a dataset to determine outcomes that are discriminatory (Mehrabi, et al., 2021). For instance, a model might use race as a proxy for recidivism in law enforcement without accounting for social factors, such as the over-policing of certain communities.
Representational harms can lead to allocative harms and distributive injustices, including an over- or under-distribution of resources and opportunities to a particular group based on irrelevant personal characteristics such as race, gender, and sexual orientation (Lloyd, 2018). For example, some facial analysis or body scanning systems have been trained using “pale male” and binary-gender datasets that can lead to airport security systems singling out, or working less optimally for, bodies that do not align with white, heteronormative standards. These biases are important issues of concern as machine learning models are increasingly used to support decision-making in high-stakes contexts, including education, banking, employment and housing (AI Now Institute, New York University, 2018).
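The mechanics of representation bias described above can be made concrete with a small, purely illustrative sketch. All of the data below is hypothetical and invented for demonstration (the groups, scores and decision rule are not drawn from any real system); the point is only to show how a group that is under-represented in training data can end up with systematically higher error rates:

```python
# Illustrative sketch with hypothetical toy data: representation bias in a
# training set translating into unequal error rates between two groups.
import random

random.seed(0)

def make_person(group):
    # Invented scoring convention: the same underlying qualification maps to
    # systematically lower observed scores for group "B".
    qualified = random.random() < 0.5
    noise = random.gauss(0, 5)
    offset = 0 if group == "A" else -10
    score = (70 if qualified else 50) + offset + noise
    return {"group": group, "score": score, "qualified": qualified}

# Group B is heavily under-represented in the training data (50 of 1,000).
train = [make_person("A") for _ in range(950)] + [make_person("B") for _ in range(50)]

# A single decision threshold fitted to the bulk of the training data:
# a crude stand-in for a model that optimises overall accuracy.
threshold = sorted(p["score"] for p in train)[len(train) // 2]

def accuracy(people):
    return sum((p["score"] >= threshold) == p["qualified"] for p in people) / len(people)

test_a = [make_person("A") for _ in range(1000)]
test_b = [make_person("B") for _ in range(1000)]
print(f"accuracy on group A: {accuracy(test_a):.2f}")
print(f"accuracy on group B: {accuracy(test_b):.2f}")
```

Because the threshold is dominated by the majority group, qualified members of group B routinely fall below it: the model is markedly less accurate for the group the training data under-represents, even though no group label is used at decision time.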
It is not enough, however, for us to simply identify and discuss these harms as “principles alone cannot guarantee ethical AI”. AI ethics work must form the foundation of affirmative action. We therefore advance a feminist data ethics of care praxis, based on the foregoing principles, aimed at helping actors in the machine learning economy to recognise the situated nature of machine learning and, critically, to empower them to act to prevent harm. By “praxis”, we mean ways to enact feminist data ethics of care in practice, as part of a “continuous loop of ethical research, social action/practice, and critical in-the-moment reflection or reflexivity as well as post-action evaluation”. The five components of our praxis all work towards harm prevention: (i) ensuring diverse representation and participation in the machine learning economy; (ii) critically evaluating positionality; (iii) centring human subjects at every stage of the machine learning pipeline; (iv) implementing transparency and accountability measures; and (v) equitably distributing the responsibility for operationalising a feminist data ethics of care praxis. We discuss each of these in turn.
The first component of our praxis is concerned with the extent to which diverse populations are involved in the machine learning economy and empowered to act in the interests of those represented. A major cause for concern is that STEM (science, technology, engineering, mathematics) disciplines lack diversity (Women in AI Ethics™, 2020) and, more specifically, the development of AI systems is led by a “homogenous tribe” of mostly affluent, highly educated, white American or Asian men (Chang, 2019). Some technology companies have attempted to promote diversity and inclusion through workplace training, employment quotas and other measures (Chang, 2019). However, as Webb (2020) points out, “talking about diversity — asking for forgiveness and promising to do better — isn’t the same thing as addressing diversity within the databases, algorithms, and frameworks that make up the AI ecosystem”. The ‘leaky pipeline’ — a metaphor often used to describe the disappearance of women from STEM degrees and careers — and Silicon Valley’s “bro culture” further entrench the problem (Chang, 2019).
A related concern is that many Western technology companies promote a “postfeminist sensibility” that can make feminism seem like “second nature and therefore also unnecessary”. A key driver of this sensibility is the dominant individualistic ideology of the West, including discourses of individual freedom, choice, empowerment and responsibility, which can give the false impression that the work of feminism is done. The reality is that we have not yet achieved gender parity: the projected time to close the global gender gap “has increased by a generation from 99.5 years to 135.6 years” since the start of the COVID-19 pandemic (World Economic Forum, 2021). Of the 156 countries included in the Global Gender Gap Index, “women represent only 26.1% of some 35,500 parliament seats and just 22.6% of over 3,400 ministers worldwide”, which underlines the significantly limited extent to which women exercise governing power (World Economic Forum, 2021). In Silicon Valley, in particular, representation of women on boards and in executive positions remains low (Bell and Belt, 2019). The danger of leaving the design and development of technologies that structure society to an elite, homogenous group (Firestone, 1970) is that they are unlikely to be fully attuned to the conditions of racism, sexism, classism and other intersecting oppressions. Addressing intersectional oppression arguably requires a shift from neoliberal individualism to a feminist consciousness and solidarity that challenges the institutional and cultural structures of power that affect the lives of everyday people.
The second component of our praxis involves actors in the machine learning economy critically examining their positionality and that of others. We might ask, for instance, who is collecting, cleaning, sorting, processing and reviewing data along the machine learning pipeline and what biases these decision-makers could introduce or reproduce. These are important lines of inquiry because, as we explained in the context of our first overarching principle, data are never “natural” or “raw” renderings of reality (see, e.g., Gitelman, 2013). Understanding positionality in a feminist context is also important given the “political whiteness” of mainstream or “popular” (Banet-Weiser, et al., 2020) feminisms that can result in societal actors co-opting movements to the detriment of minority groups (see, e.g., the discussion of #MeToo in Phipps, 2020). Women of colour, among others, continue to call for non-Western and intersectional perspectives that examine how gender inequality intersects with other categories of social difference or identity, such as race, ethnicity, class, nationality, immigration, sexuality, age and disability, to produce structural and systemic forms of inequality (see e.g., Cho, et al., 2013; Collins, 2019; Crenshaw, 2017). By adopting intersectional approaches to positionality, we can map the machine learning economy in ways that do not gloss over cultural, social and other differences.
The third component of our feminist data ethics of care praxis involves centring human subjects at every stage of the machine learning pipeline. The rhetoric of automation and big data is seductive: it promises, for instance, efficient, instantaneous and infinite knowledge and to tackle, or even to solve, the world’s most pressing challenges (Mayer-Schönberger and Cukier, 2013). Not only is this technologically deterministic rhetoric optimistic at best, it can also promote the idea of disembodied data subjects, which decentres humans and their diverse concerns, lived experiences and power relations (Leurs, 2017). We argue for a human-centric approach that involves care for humans, particularly marginalised groups and individuals who are often disproportionately affected by machine learning gone wrong (Neff, 2020), and non-humans, such as animals and the natural environment (Merchant, 1980). This approach is important given the socio-technical nature of machine learning: that is, society shapes the design, development and deployment of technology, just as technology shapes society, including norms about the place of different people in the world (Elish and Watkins, 2020). Proponents of a feminist data ethics of care must be aware of the mutually constitutive relationship between technology and society and the potential for both to bring about change. In practice, actors in the machine learning economy can engage with community stakeholders in the data collection, development and deployment of machine learning technologies, bringing into focus the human and natural context and impact of their work at each stage of the machine learning pipeline. There should also be an emphasis on the individuals, teams and executives who make decisions throughout the machine learning pipeline — to avoid an overly objective or disembodied view of machine learning as a purely technical phenomenon.
Transparency and accountability measures are also integral to our praxis. Transparency, at its core, is “concerned with the quality of being clear, obvious and understandable without doubt or ambiguity” (Belgium v Commission, 2005). Transparency should be prioritised throughout the pipeline, from the aims of a model to data provenance, analysis and deployment, and the documentation of day-to-day decision-making by coders and developers, including the choice of coding languages. Developers might seek to mitigate harm through algorithmic impact assessments or in-depth risk-benefit assessments, interdisciplinary collaborations with researchers and other subject matter experts, and community stakeholder engagement. To move beyond the mere provision of information, it is vital that machine learning actors also implement accountability measures, including standards against which conduct and outcomes can be evaluated, to hold those who exercise power to account (Suzor, et al., 2019). These measures, among others outlined in Section 4, can help inform and improve societal understandings of the data and systems from which humans and machines generate meaning.
The fifth component of our feminist data ethics of care praxis is the equitable distribution of responsibility for this work. This starts with consideration of how, historically, paid work and care work (“work/care regimes”) have been organised along gendered lines (Raghuram, 2019). While the gendering of work is “situational and dynamic”, historical power relations continue to influence contemporary work/care regimes in which “caring” is feminised and women often undertake a disproportionate amount of care and unpaid domestic work, as distinct from men’s largely paid work (Fisher and Tronto, 1990; Jordan, 2020). Even when women undertake paid work, they often face a “‘double shift’ — the combination of a paid job and unpaid domestic work”. This inequality is well-documented: for example, the United Nations found that in 2019 women spent “three times as many hours as men each day in unpaid care and domestic work”. Our purpose in highlighting the often-gendered ordering of work/care regimes is to warn of the risk of women, and non-binary and transgender people, along with those who identify as BIPOC, undertaking a disproportionate amount of feminist data ethics of care work. As such, we argue that key to the widespread adoption of our proposed approach throughout the machine learning economy is the explicit and equitable assignment of responsibility to all stakeholders. Equitable distribution in this context requires that responsibility for achieving feminist data ethics of care ends is integrated into day-to-day work practices, across teams and from the least to most senior employees, rather than siloed in standalone teams or projects. Before exploring how this might be achieved in practice, in Section 4, we first evaluate the emerging conventions of mainstream AI ethics content against our feminist data ethics of care framework.
3. Who: Humans in the machine learning economy
Despite an abundance of AI ethics content, there is little evidence to suggest that it has had a tangible impact on machine learning in practice (Hagendorff, 2020). Several convergent factors might account for this apparent lack of efficacy. First, the field of AI ethics is highly fragmented, with AI ethics content taking various forms including principles, frameworks, charters, manifestos, statements and guidelines (Jobin, et al., 2019; Mittelstadt, 2019). AI ethics content is also produced by a range of actors including government agencies, private companies, academic institutions and civil society organisations (Mittelstadt, 2019). These actors sometimes disagree about ethical theories, as well as whose interests should be served by ethical principles, how and to what ends (Mittelstadt, 2019). This means there is wide scope for stakeholders to prioritise different normative assumptions, interests and practical requirements across public and private contexts. Critically, AI ethics content is almost always voluntary and without legally binding obligations (Jørgensen, 2017), which leaves it open to varying levels of adoption. Thus, while AI ethics content might be plentiful, it lacks precision, remains open to debate and does little to compel participants in the machine learning economy to act. Indeed, Wagner (2018) warns of “ethics washing”: the practice of making non-binding commitments to ethics as a way to be shielded from public regulation, while eliminating the need for substantive action. A lack of transparency within the machine learning economy also makes it difficult to evaluate whether commitments to AI ethics have extended from theory to practice.
The apparent lack of efficacy of AI ethics content might also be explained by the types of principles that tend to be included in these documents. In Jobin, et al.’s (2019) content analysis of 84 AI ethics documents, the principles transparency, justice and fairness, nonmaleficence, and responsibility were the most common. This set of principles appears to align fairly well with our feminist data ethics of care praxis at face value. Yet, on closer examination, it falls short both conceptually and pragmatically. For example, the principle of transparency, as articulated in AI ethics content, tends to be defined in terms of interpretability of data and “explainability” of AI enabled decision-making (Jobin, et al., 2019). There are calls for increased disclosure of algorithms and data, access to source code and capacity for third party audits. But the question of who transparency is for is largely unaddressed. If transparency is conceived in terms of access to data and technical systems, there is a risk that the utility of transparency will largely accrue to a heteropatriarchal class of technically proficient data scientists. Meaningful transparency must extend beyond technical access and should involve the disclosure of decision-making and outcomes throughout the machine learning pipeline for evaluation by a diverse range of actors.
A feminist data ethics of care approach to transparency might involve systematically documenting and analysing the positionality of the people involved in the design, development and deployment of machine learning technology. The question of who is making decisions matters for understanding the potential biases and limitations that decision-makers embed within machine learning enabled systems. Evaluating the positionality of individual decision-makers not only improves the capacity to predict and prevent harm; it also gives machine learning designers and deployers an opportunity to improve the diversity of people making decisions throughout the pipeline. Within a feminist data ethics of care framework, transparency is not simply a technical issue, but one that aims to: identify opportunities for improving workplace diversity; identify real decision-makers in the machine learning pipeline to understand the potential biases they are at risk of embedding; and hold real actors to account (Hasselbalch, 2021).
While the principle of responsibility is common to many AI ethics documents, it tends to be poorly defined and often includes abstract discussions that pose ontological questions, such as whether AI can itself be a legal subject and therefore held accountable for automated decisions (Jobin, et al., 2019). These abstractions arguably take a highly disembodied and objective view of machine learning that prevents the proper assignment of responsibility to real decision-makers. Indeed, Terzis (2020) argues that AI ethics is “a genuine fallacy” on the grounds that it fails to address “one of the most fundamental issues in the field of advanced computational technology: the freedom and the subjectivity of all the agents involved, be it the CEO of a tech-giant, the project manager, the business analyst, the developer or the micro-worker”. As we have argued, achieving and maintaining feminist data ethics of care requires an equitable distribution of work to all stakeholders in the machine learning economy, from micro-workers to chief executive officers. When responsibility for implementing ethical principles within an organisation is not widespread, the organisation’s ethical, technical and economic objectives may be less likely to converge. For example, at Google in 2021, Timnit Gebru, a high-profile leader of an AI ethics research group within the company, was asked by Google executives to retract a research paper she authored that warned of problems with a new AI system (Simonite, 2021). Google was, at the time of this controversy, using the system as part of its search engine (Simonite, 2021). Gebru ultimately left Google with both parties embroiled in a high-profile public dispute and seemingly little to no reconciliation of the ethical and commercial imperatives at stake.
The propensity for abstraction over widespread and grounded responsibility is evident in the principle of nonmaleficence, which is also common to AI ethics content. The concept of nonmaleficence is typically used to articulate the idea that AI should not cause harm, and to raise concerns about individuals’ safety, security and privacy, and about potential discrimination (Jobin, et al., 2019). The problem is that concepts of discrimination and harm tend to be presented as abstractions and detached from real actors, or actions, within the machine learning economy (Terzis, 2020). In particular, the question of who is causing harm is rarely addressed and so responsibility for harm prevention is rarely properly assigned. As we contend in our feminist data ethics of care praxis, harm prevention should be intersectional, requiring a multifaceted and systemic approach grounded in the lived experiences of both the subjects of machine learning technologies and individuals working in the machine learning economy.
In AI ethics content, calls for justice and fairness are typically framed around the prevention of bias and discrimination in the use of AI-enabled systems. The values of diversity, inclusion and equality are often evoked in these discussions, yet they tend to be conceived narrowly as individual rights. For example, AI ethics content often calls for the protection of individual rights to due process, explanation, appeal, redress and remedy (Jobin, et al., 2019). While these rights are supportive of a democratic system of governance, a limitation of rights-based approaches to ethics is that they can be founded on an erroneous assumption that all people are equally capable of exercising rights, while failing to consider limits to accessing or asserting rights that stem from social, economic and other intersectional inequities (Miller and Redhead, 2019). Similarly, a rights-based approach to justice risks placing the responsibility for responding to harm (a form of repair work) onto those subjected to harms (Kapoor, 2019). It can also lead to a focus on “discrete ‘bad actors’” and on individual cases of unfairness or discrimination rather than a holistic view of systemic inequality that can arise from a confluence of sources operating over time. For example, a rights-based framework arguably offers little to those who may wish to prevent a machine learning enabled word association system linking “homemaker” with women and “computer programmer” with men (Bolukbasi, et al., 2016). In practice, the existence of rights alone is not sufficient to ensure just and equitable outcomes in the machine learning economy.
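The word association pattern that Bolukbasi, et al. (2016) document can be illustrated with a toy sketch. The three-dimensional vectors below are invented for illustration only (real embeddings are learned from large corpora and have hundreds of dimensions), but completing an analogy by vector offset and cosine similarity is the standard technique:

```python
from math import sqrt

# Toy word vectors, invented for illustration. A gender direction is
# deliberately baked into the first dimension, mimicking the associations
# Bolukbasi, et al. (2016) measured in embeddings trained on news text.
vectors = {
    "man":        [1.0, 0.0, 0.2],
    "woman":      [-1.0, 0.0, 0.2],
    "programmer": [0.9, 0.8, 0.1],
    "homemaker":  [-0.9, 0.8, 0.1],
    "doctor":     [0.1, 0.9, 0.5],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def analogy(a, b, c):
    """Return the word d (excluding a, b, c) closest to b - a + c."""
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

# "man is to programmer as woman is to ...?"
print(analogy("man", "programmer", "woman"))  # → homemaker
```

In Bolukbasi, et al.’s experiments, embeddings trained on news text completed the analogy in exactly this way, which is why their proposed debiasing intervention targets the learned gender direction itself rather than any individual decision.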
To undertake a feminist ethics of care approach to justice, once again we argue that the responsibility for ensuring fair outcomes must be assigned to real people and real organisations at all stages in the machine learning pipeline. Agency should not be assigned to a disembodied machine but to specific people and organisations in various parts of the machine learning economy. Ensuring diverse representation across the machine learning workforce, evaluating positionality, engaging with communities and individuals throughout the design process, and the assignment of specific responsibilities to real people, are all necessary for feminist data ethics of care outcomes. When this does not occur, and the performance of a model or technical system is the only evaluative criterion of concern, it can lead to “a focus on short-term quantities (at the expense of long-term concerns)”. For example, “short term incentives” (e.g., quarterly earnings) have arguably led to YouTube’s algorithms promoting white supremacy and other hateful content at various points in time (Ribeiro, et al., 2019). Our feminist data ethics of care framework requires thoughtful engagement with the spectrum of machine learning practices and a wide distribution of the responsibility for harm mitigation.
The focus on individual rights and tendency for abstraction in mainstream AI ethics content might be explained by a dominance of male voices in AI discourses. As Hagendorff (2020) describes, “no different from other parts of AI research, the discourse on AI ethics is also primarily shaped by men”. This means that the principles embodied in AI ethics content are often written from, and reinforce, a patriarchal viewpoint, and are rarely informed by intersectional understandings of power imbalances and experiences of systemic discrimination. Interestingly, Hagendorff’s (2020) meta-analysis of AI ethics content found that content authored by women-led organisations was more likely to present AI “within a larger network of social and ecological dependencies and relationships ... corresponding most closely with the ideas and tenets of an ethics of care”. In contrast, some male-authored AI ethics content tends to treat ethical issues as technical problems in need of technical solutions, which perhaps explains why Google’s main solution to its machine vision algorithm classifying black people as gorillas was blocking the image recognition system from identifying gorillas (Vincent, 2018). In Hagendorff’s study, male-authored ethical frameworks tended to be disconnected from their wider social and political context and failed to address “AI in context of care, nurture, help, welfare, social responsibility or ecological networks” — that is, the socio-technical nature of AI.
Of course, abstraction in AI ethics content may also arise through necessity. When seeking to capture a range of contested ideas about complex socio-technical systems into a neat organising statement or list, specificity is likely to lose out to abstraction (Terzis, 2020). However, this speaks to the limitations of such an approach to AI ethics, rather than to the merits of abstraction. Abstraction is a fundamental problem for the field of machine learning ethics because it limits the potential for assigning responsibility to real actors. To have effect, the field of AI ethics must move beyond organising statements, and the responsibility for addressing and preventing harm must be more equitably distributed. Under current conditions, in which the field of data science is dominated by a heteropatriarchal class of men (see generally Chang, 2019), a more equitable distribution of the responsibility for operationalising a feminist data ethics of care praxis in the machine learning economy depends on everyone — women, men, and non-binary and transgender people — undertaking more ethics work in their day-to-day practices. To this end, as we outline next, there are a range of solutions and policies that are available now for stakeholders to implement.
4. How: Operationalising feminist ethics of care for machine learning
Thus far we have argued that a missing link between AI ethics principles and applied AI ethics is the assignment of responsibility to real actors within the machine learning economy. In this section, we aim to situate our feminist data ethics of care praxis firmly within the current machine learning economy, and to outline the range of technical tools and systems that are currently available for operationalising feminist data ethics of care (Hagendorff, 2020). We ultimately sketch a practical guide for matching existing methods, those underpinned by the principles in our praxis, with specific organisations and actors.
The first step towards achieving a feminist data ethics of care in practice is reconceptualising AI as an industry rather than an amorphous abstraction. Terzis (2020) explains:
There is no ‘ethical AI’, unless there is an ‘ethical’ supply chain, clean from conflict materials; an ‘ethical’ UX/UI design, free from manipulative dark patterns; an ‘ethical’ workforce, comprised of ethical micro-workers; an ‘ethical’ decision-making process, conditioned on the parameter of climate justice; and eventually, an ‘ethical’ reconsideration of the ‘data-driven’ necessity. 
The practice of ethics, in a broad sense, requires that organisations that provide machine learning infrastructure, and individuals who participate in the development and deployment of machine learning technologies, are held responsible for the individual and societal impacts of their industry. As Neff (2020) warns, the globally distributed nature of the machine learning economy, which features “long global supply chains of AI systems — from data labelling work to engineering work to the front-line use of dashboard systems”, can “mask the opportunities that people have for intervening in the systems, making it hard for people to contest their results, and blur the lines of accountability and responsibility”. To help demystify this economy, in Table 1, we suggest the machine learning economy can be divided into three broad sectors: machine learning infrastructure, machine learning development and machine learning deployment. These sectors are not mutually exclusive or exhaustive, but the divisions are helpful for identifying and assigning responsibility to different actors within the economy and, ultimately, the different actions they might undertake to operationalise a feminist data ethics of care. This taxonomy offers a starting point for identifying the “social, structural and institutional configurations that enable and constrain individual action” in order to identify “pathways for influencing the design, application, modification and use of responsible AI technologies”.
Table 1: Who and what in the machine learning economy.

Infrastructure
WHO is involved? Infrastructure providers (e.g., Google, Amazon, OpenAI): data scientists, data engineers, company executives, micro-workers.
WHAT are they doing? Providing machine learning infrastructure, including pre-trained models and cloud storage.

Development
WHO is involved? Data analytics, software development, consultancy, and business services firms: data scientists, software developers, company executives.
WHAT are they doing? Providing machine learning “solutions” to businesses and governments.

Deployment
WHO is involved? End users (organisations and individuals): private companies, government agencies, universities.
WHAT are they doing? Using ML-enabled tools and systems for automated decision-making and/or to gain insights for human decision-making.
The second step towards a feminist ethics of care in practice is a change in consciousness of participants in the machine learning economy that empowers them to be agents of action. For data scientists, engineers, software developers and company executives to perceive themselves as responsible for feminist data ethics of care work, they must first perceive themselves as autonomous decision-makers. This requires a “business ecosystem where individuals will enjoy a zone of autonomy and will be incentivized to understand and embrace full responsibility”. Terzis (2020) explains:
an alternative framework for building ethical AI would be founded on the subjectivity of the individuals involved in the development process. This framework, free from ‘objectively’ ethical criteria, will take the form of a reflective process during which the individuals will obtain a clear understanding of their freedom and responsibility. 
Systemic ethical practice requires action from autonomous individuals working within the machine learning economy who share a common social consciousness and responsibility for preventing harm to everyday users. This will require organisational cultures that are supportive of equitably distributed responsibility for operationalising a feminist data ethics of care praxis. Khurana (2020) suggests giving workers the “opportunity to develop the right habits and frames of mind” through regular workplace exercises that involve complex ethical problem solving, unstructured time for reflection, and diverse interdisciplinary work environments. Another intervention is what Vakkuri, et al. (2020) call an “ethical card deck” in which “cards pose questions to the developers and answering these questions necessitates ethical consideration from the developers. Using the cards produces transparency by producing documentation, especially related to the development process”. New roles such as chief social work officer may be used to lead the integration of “social work thinking” into processes of machine learning development. This might involve community consultation and impact assessments for better contextualising machine learning technologies (Patton, 2020).
The inclusion of communities affected by machine learning through participatory and co-design methods (Katell, et al., 2020) is another straightforward practical intervention available to participants in the machine learning economy. As we have argued, community engagement is important for achieving outcomes that “address problems in their situated context and re-centre power with those most disparately affected by the harms of algorithmic systems”. Importantly, to ensure an equitable distribution of feminist data ethics of care work, community engagement must be commonplace and widespread within an organisation and not relegated to teams working exclusively on ethical problems. Such engagement must be integrated into design and deployment projects from the very beginning.
Once real people working within the machine learning economy are empowered, or motivated, to be responsible decision-makers, they have available to them a range of technical solutions for implementing feminist data ethics of care. As Morley, et al. (2020) point out: “how to apply ethics to the development of machine learning is an open question that can be solved in a multitude of different ways at different scales and in different contexts”. Following Morley, et al. (2020), rather than attempting to assert objective and static standards for undertaking a feminist data ethics of care praxis, we argue there should always be a focus on mixed methods. People working within the machine learning economy should take up a range of solutions that are available to them now, but they should also undertake active reflection and iteration, always seeking to change culture and build capacity throughout the machine learning economy.
More specifically, there are toolkits for detecting, mitigating and auditing discrimination and bias, such as those curated by the FAT/ML and XAI communities, the AI Fairness 360 toolkit, the What-If Tool, Facets, fairlearn and Fairness Flow. Another is what Gebru, et al. (2020) call “datasheets for datasets” for documenting the “motivation, composition, collection process, recommended uses, and so on” of datasets used in the machine learning economy. Datasheets can be “implemented easily and concretely”, help to address the lack of widely applied industry standards for documenting machine learning datasets, and potentially improve “transparency and accountability within the machine learning community”. Gebru, et al. (2020) argue that datasheets also have the potential to “mitigate unwanted biases in machine learning systems, facilitate greater reproducibility of machine learning results, and help researchers and practitioners select more appropriate datasets for their chosen tasks”. As datasheets are meant to be completed by real people (and not automated systems), they might also provide an opportunity for teams to undertake and document positionality and community engagement, with a view to anticipating potential biases and preventing harmful outcomes.
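To give a concrete sense of what such toolkits compute, the following self-contained sketch (plain Python, with invented data; toolkits such as AI Fairness 360 and fairlearn provide far richer versions of these metrics) calculates per-group selection rates and the demographic parity difference for a set of binary decisions:

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive (1) decisions received by each sensitive group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(decisions, groups):
    """Gap between the highest and lowest group selection rates (0 = parity)."""
    rates = selection_rates(decisions, groups).values()
    return max(rates) - min(rates)

# Invented example: 1 = hired, 0 = rejected, across two groups "a" and "b".
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

print(selection_rates(decisions, groups))  # group "a": 0.75, group "b": ~0.17
print(demographic_parity_difference(decisions, groups))
```

A non-zero difference does not by itself establish discrimination, which is one reason our praxis pairs such metrics with positionality evaluation and community engagement rather than treating them as a complete solution.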
Overall, to undertake our praxis, a broad range of actors within the machine learning economy must be enabled to develop a social consciousness supportive of feminist data ethics of care work (Poursabzi-Sangdeh, et al., 2018). In Table 2, we map our feminist data ethics of care framework onto the sectors of the machine learning economy identified above, and we connect them to methods for operationalising the components of our praxis. This roadmap shows that a mix of technical, social and cultural interventions may be combined to enact a feminist data ethics of care in practice, across the machine learning economy at large.
Table 2: A practical guide to a feminist ethics of care in the machine learning economy.

PRINCIPLES (applicable across all sectors):
(i) ensuring diverse representation and participation;
(ii) critically evaluating positionality;
(iii) centring human subjects at every stage of the pipeline;
(iv) implementing iterative transparency and accountability measures;
(v) equitable distribution of care work.

Infrastructure
WHO is involved? Infrastructure providers (e.g., Google, Amazon, OpenAI): data scientists, data engineers, company executives, micro-workers.
WHAT are they doing? Providing machine learning infrastructure, including pre-trained models and cloud storage.
HOW can they enact a feminist data ethics of care? PRACTICES: diversity within teams and in executive positions through recruitment, training, and other measures to address the ‘leaky pipeline’; regular workplace ethics exercises, including positionality evaluation and documentation, and unstructured time for reflection; a Chief Social Work Officer and other ethical leadership; datasheets and other reporting standards aimed at documenting decisions, anticipating biases and preventing harmful outcomes; bias and discrimination detection and mitigation toolkits, impact assessments, and in-depth risk-benefit analyses, including through collaboration with subject matter experts; community consultation and participation in data collection and system design.

Development
WHO is involved? Data analytics, software development, consultancy, and business services firms: data scientists, software developers, company executives.
WHAT are they doing? Providing machine learning “solutions” to businesses and governments.
HOW can they enact a feminist data ethics of care? PRACTICES: diversity within teams and in executive positions through recruitment, training, and other measures to address the ‘leaky pipeline’; regular workplace ethics exercises, including positionality evaluation and documentation, and unstructured time for reflection; bias and discrimination detection and mitigation toolkits, impact assessments, and in-depth risk-benefit analyses, including through collaboration with subject matter experts.

Deployment
WHO is involved? End users (organisations and individuals): private companies, government agencies, universities.
WHAT are they doing? Using ML-enabled tools and systems for automated decision-making and/or to gain insights for human decision-making.
HOW can they enact a feminist data ethics of care? PRACTICES: diversity within teams and leadership; community consultation and participation in data collection and system design; bias and discrimination detection and mitigation practices, including impact assessments and in-depth risk-benefit analyses through collaboration with subject matter experts; independent and transparent auditing.
5. Conclusion and future research
This article conceptualised and provided an initial roadmap for operationalising a feminist data ethics of care for the machine learning economy. After evaluating select AI ethics content against our feminist data ethics of care framework, we identified a propensity for abstraction over grounded responsibility that often leaves the question of who is causing harm unaddressed and the responsibility for harm prevention unassigned. This is concerning given that in mainstream AI ethics conversations, limited attention has been paid to the significance of gender power relations, including the often-gendered nature of “care work” and how it might correlate with an unequal distribution of the responsibility for operationalising ethical principles in the machine learning economy. For those seeking to address this problem, we identified a range of practical methods and other interventions that are available to operationalise, at least in part, our feminist data ethics of care framework.
The theory and practice of feminist data ethics of care for machine learning provides fertile ground for future research. There is a particular need for work that further develops the ‘how’ of operationalising a feminist data ethics of care, including built-for-purpose tools, methods and techniques for all stages of the machine learning pipeline. Given that this technology is constantly evolving, and its applications change over time, there is also a need for research that conceptualises and measures harm. When attempting to prevent and address intersectional harms through a feminist data ethics of care framework, an equitable distribution of responsibility for this work will be critical in all contexts, from machine learning to the AI economy more broadly.
About the authors
Dr. Joanne Gray is a Lecturer in the School of Communication at the Queensland University of Technology and a Chief Investigator at the Digital Media Research Centre. Her research focuses on platform policy and governance, including the exercise of private power through automated technologies.
E-mail: joanne [dot] e [dot] gray [at] qut [dot] edu [dot] au
Dr. Alice Witt is a Postdoctoral Research Fellow in the Digital Media Research Centre, Faculty of Business and Law at the Queensland University of Technology. Her research investigates the exercise of governing power in the digital age, focusing on the intersections of regulation, technology, and gender.
E-mail: ae [dot] witt [at] qut [dot] edu [dot] au
The authors wish to acknowledge and express thanks to their colleagues at the Digital Media Research Centre, particularly Professor Dan Angus and the members of the feminist machine learning working group, who provided valuable feedback on earlier drafts of the paper. In support of this project, Dr Gray received funding from QUT’s Women in Research committee.
1. M. Cifor, et al., 2019. “Feminist data manifest — No. Why refusal,” at https://www.manifestno.com, accessed 15 November 2021.
2. A.E. Marwick, 2019, p. 310.
3. M. Cifor, et al., 2019. “Feminist data manifest — No. Why refusal,” at https://www.manifestno.com, accessed 15 November 2021.
4. Our feminist agenda is inclusive of the interests of womxn; that is, women defined broadly to include those who identify as cis-gendered, transgender and non-binary.
5. Fisher and Tronto, 1990, p. 40.
6. Luka and Millette, 2018, p. 4.
7. Cho, et al., 2013, p. 787.
8. Cho, et al., 2013, p. 795.
9. Leurs, 2017, p. 150.
10. Sambasivan, et al., 2021, p. 1.
11. Haraway, 1988, p. 589.
12. Costanza-Chock, 2020, p. 5.
13. Mittelstadt, 2019, p. 501.
14. Crowder, 2017, p. 224.
15. Webb, 2020, p. 52.
16. Webb, 2020, p. 57.
17. Gill, 2007, p. 255.
18. Baer, 2016, p. 17.
19. Phipps, 2020, p. 5.
20. Toffoletti and Starr, 2016, p. 492.
21. Toffoletti and Starr, 2016, p. 493.
22. United Nations Women, 2020, p. 2.
23. Black, Indigenous and People of Colour.
24. Terzis, 2020, p. 221.
25. Hoffmann, 2019, p. 903.
26. Thomas and Uminsky, 2020, p. 1.
27. Thomas and Uminsky, 2020, p. 4.
28. Hagendorff, 2020, p. 103.
29. Hagendorff, 2020, pp. 103–104.
30. Hagendorff, 2020, p. 103.
31. Terzis, 2020, p. 226.
32. Neff, 2020, p. 5.
34. Terzis, 2020, p. 226.
35. Terzis, 2020, p. 228.
36. Khurana, 2020, paragraph 16.
37. Vakkuri, et al., 2020, p. 42.
38. Patton, 2020, p. 86.
39. Katell, et al., 2020, pp. 46–47.
40. Morley, et al., 2020, p. 8.
41. Hagendorff, 2020, p. 111.
42. Gebru, et al., 2020, p. 2.
AI Now Institute, New York University, 2018. “Algorithmic accountability policy toolkit,” at https://ainowinstitute.org/aap-toolkit.pdf, accessed 15 November 2021.
M. Alvi, A. Zisserman and C. Nellaker, 2018. “Turning a blind eye: Explicit removal of biases and variation from deep neural network embeddings,” arXiv:1809.02169 (6 September), at https://arxiv.org/abs/1809.02169, accessed 15 November 2021.
J. Angwin, J. Larson, S. Mattu and L. Kirchner, 2016. “Machine bias,” ProPublica (23 May), at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 15 November 2021.
H. Baer, 2016. “Redoing feminism: Digital activism, body politics, and neoliberalism,” Feminist Media Studies, volume 16, number 1, pp. 17–34.
doi: https://doi.org/10.1080/14680777.2015.1093070, accessed 15 November 2021.
S. Banet-Weiser, R. Gill and C. Rottenberg, 2020. “Postfeminism, popular feminism and neoliberal feminism? Sarah Banet-Weiser, Rosalind Gill and Catherine Rottenberg in conversation,” Feminist Theory, volume 21, number 1, pp. 3–24.
doi: https://doi.org/10.1177/1464700119842555, accessed 15 November 2021.
S. Barocas and A.D. Selbst, 2016. “Big data’s disparate impact,” California Law Review, volume 104, number 3, pp. 671–732.
doi: https://doi.org/10.15779/z38bg31, accessed 15 November 2021.
C. Bassett, S. Kember and K. O’Riordan, 2020. Furious: Technological feminism and digital futures. London: Pluto Press.
doi: https://doi.org/10.2307/j.ctvs09qqf.10, accessed 15 November 2021.
Belgium v Commission, 2005. “Case C-110/03, Kingdom of Belgium v Commission of the European Communities” (14 April), at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=ecli%3AECLI%3AEU%3AC%3A2005%3A223, accessed 15 November 2021.
D. Bell and D. Belt, 2019. “Gender diversity in Silicon Valley,” Harvard Law School Forum on Corporate Governance (30 April), at https://corpgov.law.harvard.edu/2019/04/30/gender-diversity-in-silicon-valley/, accessed 15 November 2021.
J. Bentham, 1996. The collected works of Jeremy Bentham: An introduction to the principles of morals and legislation. Oxford: Clarendon Press.
doi: https://doi.org/10.1093/actrade/9780198205166.book.1, accessed 15 November 2021.
R. Bivens, 2015. “Under the hood: The software in your feminist approach,” Feminist Media Studies, volume 15, number 4, pp. 714–717.
doi: https://doi.org/10.1080/14680777.2015.1053717, accessed 15 November 2021.
T. Bolukbasi, K.-W. Chang, J. Zou, V. Saligrama and A. Kalai, 2016. “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings,” arXiv:1607.06520 (21 July), at https://arxiv.org/abs/1607.06520, accessed 15 November 2021.
E. Broad, 2018. Made by humans: The AI condition. Melbourne: Melbourne University Press.
J. Buolamwini, 2017. “Gender shades: Intersectional phenotypic and demographic evaluation of face datasets and gender classifiers,” MIT Master’s thesis, at https://www.media.mit.edu/publications/full-gender-shades-thesis-17/, accessed 15 November 2021.
J.A. Bullinaria, 2005. “IAI: The roots, goals and sub-fields of AI,” at https://www.cs.bham.ac.uk/~jxb/IAI/w2.pdf, accessed 15 November 2021.
E. Chang, 2019. Brotopia: Breaking up the boys’ club of Silicon Valley. New York: Portfolio/Penguin.
S. Cho, K.W. Crenshaw and L. McCall, 2013. “Toward a field of intersectionality studies: Theory, applications, and praxis,” Signs, volume 38, number 4, pp. 785–810.
doi: https://doi.org/10.1086/669608, accessed 15 November 2021.
M. Cifor, P. Garcia, T.L. Cowan, J. Rault, T. Sutherland, A.S. Chan, J. Rode, A.L. Hoffmann, N. Salehi and L. Nakamura, 2019. “Feminist data manifest — No,” at https://www.manifestno.com, accessed 15 November 2021.
P.H. Collins, 2019. Intersectionality as critical social theory. Durham, N.C.: Duke University Press.
doi: https://doi.org/10.2307/j.ctv11hpkdj, accessed 15 November 2021.
S. Costanza-Chock, 2020. Design justice: Community-led practices to build the worlds we need. Cambridge, Mass.: MIT Press.
doi: https://doi.org/10.7551/mitpress/12255.001.0001, accessed 15 November 2021.
K. Crenshaw, 2017. On intersectionality: Essential writings. New York: New Press.
K. Crenshaw, 1990. “Mapping the margins: Intersectionality, identity politics, and violence against women of color,” Stanford Law Review, volume 43, number 6, pp. 1,241–1,300.
doi: https://doi.org/10.2307/1229039, accessed 15 November 2021.
K. Crawford, R. Dobbe, T. Dryer, G. Fried, B. Green, E. Kaziunas, A. Kak, V. Mathur, E. McElroy, A.N. Sánchez, D. Raji, J.L. Rankin, R. Richardson, J. Schultz, S.M. West and M. Whittaker, 2019. “2019 report,” AI Now, at https://ainowinstitute.org/AI_Now_2019_Report.pdf, accessed 15 November 2021.
R. Crowder, 2017. “A mindful community of praxis model for well-being in the academy,” Journal of Educational Thought (JET)/Revue de la Pensée Éducative, volume 50, numbers 2–3, pp. 216–231.
C. D’Ignazio and L.F. Klein, 2020. Data feminism. Cambridge, Mass.: MIT Press.
doi: https://doi.org/10.7551/mitpress/11805.001.0001, accessed 15 November 2021.
L. D’Olimpio, 2019. “Ethics explainer: Ethics of care,” Ethics Centre (16 May), at https://ethics.org.au/ethics-explainer-ethics-of-care/, accessed 15 November 2021.
M.C. Elish and E.A. Watkins, 2020. “Repairing innovation: A study of integrating AI in clinical care,” Data & Society, at https://datasociety.net/pubs/repairing-innovation.pdf, accessed 15 November 2021.
European Commission, 2021. “Excellence and trust in AI — Brochure” (23 February), at https://digital-strategy.ec.europa.eu/en/library/excellence-and-trust-ai-brochure, accessed 15 November 2021.
S. Firestone, 1970. The dialectic of sex: The case for feminist revolution. New York: Morrow.
B. Fisher and J. Tronto, 1990. “Toward a feminist theory of caring,” In: E.K. Abel and M.K. Nelson (editors). Circles of care: Work and identity in women’s lives. Albany: State University of New York Press, pp. 35–62.
T. Gebru, J. Morgenstern, B. Vecchione, J.W. Vaughan, H. Wallach, H. Daumé, III and K. Crawford, 2020. “Datasheets for datasets,” arXiv:1803.09010 (19 March), at http://arxiv.org/abs/1803.09010, accessed 15 November 2021.
R. Gill, 2007. Gender and the media. Cambridge: Polity.
L. Gitelman (editor), 2013. “Raw data” is an oxymoron. Cambridge, Mass.: MIT Press.
doi: https://doi.org/10.7551/mitpress/9302.001.0001, accessed 15 November 2021.
T. Hagendorff, 2020. “The ethics of AI ethics: An evaluation of guidelines,” Minds and Machines, volume 30, number 1, pp. 99–120.
doi: https://doi.org/10.1007/s11023-020-09517-8, accessed 15 November 2021.
D. Haraway, 1988. “Situated knowledges: The science question in feminism and the privilege of partial perspective,” Feminist Studies, volume 14, number 3, pp. 575–599.
doi: https://doi.org/10.2307/3178066, accessed 15 November 2021.
G. Hasselbalch, 2021. “A framework for a data interest analysis of artificial intelligence,” First Monday, volume 26, number 7, at https://firstmonday.org/article/view/11091/10168, accessed 15 November 2021.
doi: https://doi.org/10.5210/fm.v26i7.11091, accessed 15 November 2021.
N. Henry, S. Vasil and A. Witt, 2021. “Digital citizenship in a global society: A feminist approach,” Feminist Media Studies (14 June).
doi: https://doi.org/10.1080/14680777.2021.1937269, accessed 15 November 2021.
A.L. Hoffmann, 2019. “Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse,” Information, Communication & Society, volume 22, number 7, pp. 900–915.
doi: https://doi.org/10.1080/1369118X.2019.1573912, accessed 15 November 2021.
A. Jobin, M. Ienca and E. Vayena, 2019. “The global landscape of AI ethics guidelines,” Nature Machine Intelligence, volume 1, number 9 (2 September), pp. 389–399.
doi: https://doi.org/10.1038/s42256-019-0088-2, accessed 15 November 2021.
A. Jordan, 2020. “Masculinizing care? Gender, ethics of care, and fathers’ rights groups,” Men and Masculinities, volume 23, number 1, pp. 20–41.
doi: https://doi.org/10.1177/1097184X18776364, accessed 15 November 2021.
R.F. Jørgensen, 2017. “What platforms mean when they talk about human rights,” Policy & Internet, volume 9, number 3, pp. 280–296.
doi: https://doi.org/10.1002/poi3.152, accessed 15 November 2021.
R. Kapoor, 2019. “What is wrong with a rights-based approach to morality?” Journal of National Law University Delhi, volume 6, number 1, pp. 1–11.
doi: https://doi.org/10.1177/2277401719870004, accessed 15 November 2021.
M. Katell, M. Young, D. Dailey, B. Herman, V. Guetler, A. Tam, C. Binz, D. Raz and P.M. Krafft, 2020. “Toward situated interventions for algorithmic equity: Lessons from the field,” FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 45–55.
doi: https://doi.org/10.1145/3351095.3372874, accessed 15 November 2021.
R. Khurana, 2020. “Virtues not principles,” Montreal AI Ethics Institute (17 April), at https://montrealethics.ai/virtues-not-principles/, accessed 15 November 2021.
N.M. Kinyanjui, T. Odonga, C. Cintas, N.C.F. Codella, R. Panda, P. Sattigeri and K.R. Varshney, 2020. “Fairness of classifiers across skin tones in dermatology,” In: A.L. Martel, P. Abolmaesumi, D. Stoyanov, D. Mateus, M.A. Zuluaga, S.K. Zhou, D. Racoceanu and L. Joskowicz (editors). Medical image computing and computer assisted intervention — MICCAI 2020. Lecture Notes in Computer Science, volume 12266. Cham, Switzerland: Springer, pp. 320–329.
doi: https://doi.org/10.1007/978-3-030-59725-2_31, accessed 15 November 2021.
K. Leurs, 2017. “Feminist data studies: Using digital methods for ethical, reflexive and situated socio-cultural research,” Feminist Review, volume 115, number 1, pp. 130–154.
doi: https://doi.org/10.1057/s41305-017-0043-1, accessed 15 November 2021.
K. Lloyd, 2018. “Bias amplification in artificial intelligence systems,” arXiv:1809.07842 (20 September), at https://arxiv.org/abs/1809.07842, accessed 15 November 2021.
M.E. Luka and K. Leurs, 2020. “Feminist data studies,” In: K. Ross, I. Bachmann, V. Cardo, S. Moorti and M. Scarcelli (editors). Encyclopedia of gender, media, and communication. Hoboken, N.J.: Wiley-Blackwell.
doi: https://doi.org/10.1002/9781119429128.iegmc062, accessed 15 November 2021.
M.E. Luka and M. Millette, 2018. “(Re)framing big data: Activating situated knowledges and a feminist ethics of care in social media research,” Social Media + Society (2 May).
doi: https://doi.org/10.1177/2056305118768297, accessed 15 November 2021.
A.E. Marwick, 2019. “None of this is new (media): Feminisms in the social media age,” In: T. Oren and A.L. Press (editors). Routledge handbook of contemporary feminism. London: Routledge, pp. 309–332.
doi: https://doi.org/10.4324/9781315728346, accessed 15 November 2021.
V. Mayer-Schönberger and K. Cukier, 2013. Big data: A revolution that will transform how we live, work, and think. London: John Murray.
N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman and A. Galstyan, 2021. “A survey on bias and fairness in machine learning,” ACM Computing Surveys, volume 54, number 6, article number 115, pp. 1–35.
doi: https://doi.org/10.1145/3457607, accessed 15 November 2021.
C. Merchant, 1980. The death of nature: Women, ecology, and the scientific revolution. San Francisco: Harper & Row.
J. Metcalf, E. Moss and d. boyd, 2019. “Owning ethics: Corporate logics, Silicon Valley, and the institutionalization of ethics,” Social Research, volume 86, number 2, pp. 449–476, and at https://muse.jhu.edu/article/732185, accessed 15 November 2021.
H. Miller and R. Redhead, 2019. “Beyond ‘rights-based approaches’? Employing a process and outcomes framework,” International Journal of Human Rights, volume 23, number 5, pp. 699–718.
doi: https://doi.org/10.1080/13642987.2019.1607210, accessed 15 November 2021.
B. Mittelstadt, 2019. “Principles alone cannot guarantee ethical AI,” Nature Machine Intelligence, volume 1, number 11 (4 November), pp. 501–507.
doi: https://doi.org/10.1038/s42256-019-0114-4, accessed 15 November 2021.
J. Morley, L. Floridi, L. Kinsey and A. Elhalal, 2020. “From what to how: An overview of AI ethics tools, methods and research to translate principles into practices,” Science and Engineering Ethics, volume 26, pp. 2,141–2,168.
doi: https://doi.org/10.1007/s11948-019-00165-5, accessed 15 November 2021.
M. Muller, I. Lange, D. Wang, D. Piorkowski, J. Tsay, Q. Vera Liao, C. Dugan and T. Erickson, 2019. “How data science workers work with data: Discovery, capture, curation, design, creation,” CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, paper number 126, pp. 1–15.
doi: https://doi.org/10.1145/3290605.3300356, accessed 15 November 2021.
G. Neff, 2020. “From bad users and failed uses to responsible technologies: A call to expand the AI ethics toolkit,” AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 5–6.
doi: https://doi.org/10.1145/3375627.3377141, accessed 15 November 2021.
S.U. Noble, 2018. Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.
C. O’Neil, 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.
Organisation for Economic Co-operation and Development (OECD), 2018. “Bridging the digital gender divide: Include, upskill, innovate,” at https://www.oecd.org/digital/bridging-the-digital-gender-divide.pdf, accessed 15 November 2021.
D.U. Patton, 2020. “Social work thinking for UX and AI design,” Interactions, volume 27, number 2, pp. 86–89.
doi: https://doi.org/10.1145/3380535, accessed 15 November 2021.
A. Phipps, 2020. Me, not you: The trouble with mainstream feminism. Manchester: Manchester University Press, and at http://www.jstor.org/stable/j.ctvzgb6n6.4, accessed 15 November 2021.
F. Poursabzi-Sangdeh, D.G. Goldstein, J.M. Hofman, J. Wortman Vaughan and H. Wallach, 2018. “Manipulating and measuring model interpretability,” arXiv:1802.07810 (21 February), at https://arxiv.org/abs/1802.07810, accessed 15 November 2021.
P. Raghuram, 2019. “Race and feminist care ethics: Intersectionality as method,” Gender, Place & Culture, volume 26, number 5, pp. 613–637.
doi: https://doi.org/10.1080/0966369X.2019.1567471, accessed 15 November 2021.
M.H. Ribeiro, R. Ottoni, R. West, V.A.F. Almeida and W. Meira, 2019. “Auditing radicalization pathways on YouTube,” arXiv:1908.08313 (22 August), at https://arxiv.org/abs/1908.08313, accessed 15 November 2021.
D. Roselli, J. Matthews and N. Talagala, 2019. “Managing bias in AI,” WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference, pp. 539–544.
doi: https://doi.org/10.1145/3308560.3317590, accessed 15 November 2021.
A. Royer, 2021. “The urgent need for regulating global ghost work,” Brookings Institution (9 February), at https://www.brookings.edu/techstream/the-urgent-need-for-regulating-global-ghost-work/, accessed 15 November 2021.
N. Sambasivan, S. Kapania, H. Highfill, D. Akrong, P. Paritosh and L. Aroyo, 2021. “‘Everyone wants to do the model work, not the data work’: Data cascades in high-stakes AI,” CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, article number 39, pp. 1–15.
doi: https://doi.org/10.1145/3411764.3445518, accessed 15 November 2021.
T. Simonite, 2021. “What really happened when Google ousted Timnit Gebru,” Wired, at https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/, accessed 15 November 2021.
H. Suresh and J.V. Guttag, 2021. “A framework for understanding sources of harm throughout the machine learning life cycle,” arXiv:1901.10002v4 (15 June), at http://arxiv.org/abs/1901.10002, accessed 15 November 2021.
N.P. Suzor, S.M. West, A. Quodling and J. York, 2019. “What do we mean when we talk about transparency? Toward meaningful transparency in commercial content moderation,” International Journal of Communication, volume 13, at https://ijoc.org/index.php/ijoc/article/view/9736, accessed 15 November 2021.
P. Terzis, 2020. “Onward for the freedom of others: Marching beyond the AI ethics,” FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 220–229.
R. Thomas and D. Uminsky, 2020. “The problem with metrics is a fundamental problem for AI,” arXiv:2002.08512 (20 February), at https://arxiv.org/abs/2002.08512, accessed 15 November 2021.
K. Toffoletti and K. Starr, 2016. “Women academics and worklife balance: Gendered discourses of work and care,” Gender, Work & Organization, volume 23, number 5, pp. 489–504.
doi: https://doi.org/10.1111/gwao.12133, accessed 15 November 2021.
United Nations, 2019. “The role of the United Nations in combatting discrimination and violence against lesbian, gay, bisexual, transgender and intersex people” (20 September), at https://www.ohchr.org/Documents/Issues/Discrimination/LGBT/UN_LGBTI_summary_2019.pdf, accessed 15 November 2021.
United Nations, n.d. “Sustainable development goals: 17 goals to transform our world,” at https://www.un.org/en/exhibits/page/sdgs-17-goals-transform-world, accessed 15 November 2021.
United Nations Women, 2020. “Progress on the Sustainable Development Goals: The gender snapshot 2020,” at https://www.unwomen.org/en/digital-library/publications/2020/09/progress-on-the-sustainable-development-goals-the-gender-snapshot-2020, accessed 15 November 2021.
V. Vakkuri, K.-K. Kemell and P. Abrahamsson, 2020. “ECCOLA — A method for implementing ethically aligned AI systems,” 2020 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 195–204.
doi: https://doi.org/10.1109/SEAA51224.2020.00043, accessed 15 November 2021.
A. Vincent, 2018. “When it comes to gorillas, Google Photos remains blind,” Wired, at https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/, accessed 15 November 2021.
B. Wagner, 2018. “Ethics as an escape from regulation. From ‘ethics-washing’ to ‘ethics-shopping’?” In: E. Bayamlioğlu, I. Baraliuc, L.A.W. Janssens and M. Hildebrandt (editors). Being profiled: Cogitas ergo sum: 10 years of profiling the European citizen. Amsterdam: Amsterdam University Press, pp. 84–89.
doi: https://doi.org/10.2307/j.ctvhrd092.18, accessed 15 November 2021.
A. Webb, 2020. The big nine: How the tech titans and their thinking machines could warp humanity. New York: PublicAffairs.
Women in AI Ethics™, 2020. “100+ brilliant women in AI ethics,” at https://100brilliantwomeninaiethics.com, accessed 15 November 2021.
World Economic Forum, 2021. “Global gender gap report 2021,” at https://www.weforum.org/reports/global-gender-gap-report-2021/digest/, accessed 15 November 2021.
Received 30 July 2021; revised 4 October 2021; accepted 15 November 2021.
This paper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
A feminist data ethics of care framework for machine learning: The what, why, who and how
by Joanne E. Gray and Alice Witt.
First Monday, Volume 26, Number 12 - 6 December 2021