First Monday

Culture by design: A data interest analysis of the European AI policy agenda
by Gry Hasselbalch



Abstract
This article investigates a moment of the big data age in which artificial intelligence became a fixed point of global negotiations between different interests in data. In particular, it traces and explicates cultural positioning as an interest in the artificial intelligence momentum through an investigation of the unfolding European AI policy agenda on trustworthy AI in the period 2018–2019.

Contents

Introduction
The European AI agenda: Sculpting the cultural interest in AI
Culture and technological change
Making the invisible visible
Data interest analysis: The European agenda’s cultural shape of AI
Conclusion

 


 

Introduction

At the end of the first 20 years of the twenty-first century, artificial intelligence technologies (AI) [1] came to be at the center of a global public debate on policy, media and industry. Having begun as a scientific endeavor and sci-fi curiosity, AI had transformed into socio-technical systems with rapid and broad societal adoption and had consequently become a fixed point of governance in the European Union. EU legislators had just implemented a momentous data protection law reform to address the challenges of a big data digitalization of societies; on a global scale, states and companies alike were carving out their space with more or less aggressive data harvesting advances, while citizens were struggling to understand their own role in emerging big data technological environments. Against that background, a European AI strategy was published in early 2018 by the European Commission and further developed in policy and expert group initiatives over a two-year period with a growing emphasis on “ethical technologies” and “trustworthy AI”.

This article traces and explicates “culture” as an interest in a societal AI momentum through an analysis of the European AI policy agenda as it evolved in the period 2018–2019, focusing in particular on the work of a high-level expert group on AI set up by the European Commission to inform the AI strategy. The article’s analysis focuses on events, documents and statements that contributed to the development of an official AI agenda in Europe and is informed by the author’s active participation as a member of the high-level group. Predominantly, the European AI agenda is examined as a component of a general process of value negotiations in a global environment. Indeed, the evolving agenda was from the outset explicitly framed as a European “third way” in what public discourse dubbed the “global AI race” among Europe, the U.S. and China.

In the 2010s, the term AI was generally used in public policy-making and discourse to describe the next frontier in big data society. AI was developed, designed and used by all types of societal stakeholders to make sense of large amounts of data, predict patterns, analyze risks and act on that knowledge to make decisions in politics, culture and industry, and on life trajectories. In essence, the popular use of the term came to denote a particularly advanced and complex design of big data systems: automated, goal-oriented, perceptive, reasoning and made powerful by complex data acquisition and processing. Thus, above all, the article investigates an institutionally framed cultural positioning as an interest in data, understanding AI as complex data processing systems whose data design forms a locus of societal power dynamics. As such, it does not seek to predict the path of AI adoption, as this will be shaped by a much broader sum of actors, interests and conditions, including the formally mediated consequences of law, policy and institutional practice as well as the unintended outcomes of people’s (users’, engineers’, etc.) practices (Epstein, et al., 2016).

Theoretically, the article is grounded in a discussion of the role of culture and interests in the development and governance of socio-technical systems. It builds on conceptualizations of culture, power and technologies in cultural studies, applied ethics and science and technology studies (STS). In combination, these perspectives treat technologies as dynamic concepts constantly in negotiation with human, societal and cultural factors. The understanding is that while technological artefacts may impose on humans and human societies, humans simultaneously impose on technology, and we may choose to do so with intention and direction. We create laws, policies and standards; we educate and program, hack and revolt. This is an important view of technological development and change, as it empowers human governance efforts by considering the multiple human and non-human factors that shape the direction of technological development.

 

++++++++++

The European AI agenda: Sculpting the cultural interest in AI

Since the 1980s, AI’s adoption in society has progressed from rule-based expert systems encoded with the knowledge of human experts to systems that evolve and learn from big data in digital environments with increasingly autonomous decision-making agency and capabilities (Alpaydin, 2016). In the 2010s, socio-technical data infrastructures enhanced by AI software systems that autonomously, or semi-autonomously, perceive and interpret their environments were increasingly embedded worldwide in the private and public sectors in health care, security, finance, emergency, defence, e-government, law, transportation and energy. The U.S. had been a first mover in terms of global capital investment in AI as well as in the development of an AI ecosystem, and China rapidly followed suit (Merz, 2019). In Europe, an increasing number of examples of socially challenging applications of AI from these regions had been in the public limelight, for example, the use of biased sentencing software in the U.S. judicial system (Angwin, et al., 2016) or China’s mass citizen social credit scoring system (Kobie, 2018). But gradually the social implications of AI used in European settings were also edging into public awareness as a component of decision-making in many different sectors [2]. The European Union was, for example, proposing and adopting initiatives to establish smart border management systems and to integrate instruments for data processing and decision-making systems in asylum, immigration and law enforcement cooperation. In Europe there were also experiments with frameworks for automating the detection and analysis of terrorist-related online content and financing activities. At the same time, individual member states were toying with AI for predictive policing, public administration of benefits, tracing vulnerable children, tax collection and even social scoring, while private sector examples most prominently included AI in banking and insurance (Spielkamp, 2019). As such, AI had become the center of negotiations between different societal interests.

It was in this setting that the contours of an institutionally framed European AI agenda took shape as a distinctive cultural positioning with an emphasis on “ethical technologies” and “trustworthy AI”. It was spelled out in core documents and statements in a process that involved European member states, a European high-level expert group on AI, a multistakeholder forum called the European AI Alliance and the European Commission. EU decision-makers recognized that AI had become an area of strategic importance, was transforming critical infrastructures in all the aforementioned sectors and was therefore also a driver of economic development. On those grounds, the EU’s AI approach was defined as a policy investment in ensuring Europe’s competitiveness on a global scale by, for example, increasing annual investments in AI development and research and establishing an agreement to join forces with national AI strategies in member states. Thus, in public media, debates and reports of this period, the AI agenda was also often described as a response to a “global AI race”. The main focus here was the competition among regional players for global leadership on the resources for AI (e.g., data access), capital investment, AI technical innovation and practical and commercially viable research and education, as well as “ethics” as a form of risk mitigation and regulation (Merz, 2019). Here, I propose that besides a race for resources, technological supremacy and risk mitigation, the explication of values-based cultural frameworks for AI played a key role.

The European Commission published its first communication on artificial intelligence in early 2018 (European Commission A, 2018), accompanied by a declaration of cooperation on artificial intelligence signed by 25 European member states (European Commission B, 2018), which was later in 2018 concretized in a Coordinated plan on artificial intelligence “made in Europe” (European Commission C, 2018). This first communication presented a general initial European approach to AI with a focus on cooperation among member states, multi-stakeholder initiatives, investment, research and technology development. Above all, AI was at this point described as part of a European economic strategy within a global competitive field. While it was not a core strategic element of this first communication on the topic, a values-based positioning was also offered: “The EU can lead the way in developing and using AI for good and for all, building on its values and its strengths.” (European Commission A, 2018) A first step toward addressing ethical concerns was also taken with the plan to draft a set of AI ethics guidelines.

Following this, a European high-level expert group on AI was established in June 2018, with 52 selected members consisting of individual experts and representatives from different stakeholder groups. Its mandate was to develop AI ethics guidelines and policy and investment recommendations for the EU. From the outset, the group’s work was framed in terms of a distinctive European framework. For example, at the group’s first meeting in Brussels in June 2018, a European Commission representative responded to a comment regarding Europe’s competitiveness with “AI cannot be imposed on us”, and it was concluded that “Europe must shape its own response to AI” [3].

Notably, the “European response” was already here defined in terms of what was presumed to be a set of shared European values. For example, at the same meeting, the chair introduced the core constituents of the group’s mandate and the European Commission’s expectations of the group as follows: “It is essential that Europe shapes AI to its own purpose and values, and creates a competitive environment for investment in AI” [4]. This directive was later taken up in the group’s discussions and defined as the search for a distinctive European position in a global setting: “Discussion also centred on identifying the uniqueness of a European approach to AI, embedding European values, while at the same time identifying the need to operate successfully in a global context” [5].

The ethics guidelines published a year later in April 2019 were likewise outlined on the basis of “European values”. Values were introduced in this document with reference to the European Commission’s vision to, among other things, ensure “an appropriate ethical and legal framework to strengthen European values” [6]. The key references here were European legal frameworks, such as the Charter of Fundamental Rights and the General Data Protection Regulation. However, European values were also gathered in one unifying ethics framework defined as the “human-centric approach”, in which the individual human being’s interests prevail over other societal interests: “The common foundation that unites these rights can be understood as rooted in respect for human dignity—thereby reflecting what we describe as a ‘human-centric approach’ in which the human being enjoys a unique and inalienable moral status of primacy in the civil, political, economic and social fields” [7].

Yet it was the delineation of a specific type of technology design and a culture of AI practitioners that in the end became the ethics guidelines’ unique cultural positioning. By 2019, several ethics guidelines for AI had already been published in European member states, outside Europe and by international organizations. Most notably, only a few months after the high-level expert group’s ethics guidelines were published, 42 countries adopted an Organisation for Economic Co-operation and Development (2019) recommendation that included ethical principles for trustworthy AI. In comparison with other, more principle-based ethics guidelines, however, the high-level expert group’s ethics guidelines were particularly focused on the operationalization of ethics in the design of AI, that is, on framing the practice of building AI and hence providing concrete and practical guidance to AI practitioners. Europe was consequently also described in the guidelines as a potential leader in the development of “ethical technology”, with a call to create a very specific approach to the design of AI. As such, ethics and values were considered a property of technological design and practice, and the guidelines urged practitioners, in addition to deployers and users of AI, to implement and apply seven ethical requirements, supplemented with an assessment list of concrete questions to guide AI practitioners.

During the process of developing the ethics guidelines, the title of the work changed from “Trusted AI” to “Trustworthy AI” [8]. While this might be conceived of as a primarily semantic change, the transformation in fact built on core discussions at group meetings centered on the inherent values of AI design. The title mirrored the conclusion of the group discussions: AI technologies should not just be trusted; the EU needed to ensure that trustworthiness was built into the “technology culture” of AI innovation. As stated in the report from the first workshop of the high-level expert group, “Trusted AI is achieved not merely through regulation, but also by putting in place a human-oriented and ethical mind-set by those dealing with AI, in each stage of the process” [9].

In this way, Trustworthy AI came into being as the European “third way” in AI innovation. This also meant that when working on the policy and investment recommendations published in June 2019, the high-level group proposed Trustworthy AI as a core European strategic area (HLEG B, 2019). Hence, the recommendations emphasized leveraging European “enablers” for Trustworthy AI, for example, by providing human-centric AI-based services for individuals, making use of public procurement to ensure trustworthy AI, integrating knowledge and awareness, updating skills among policy-makers, work forces and students, developing a research university network on AI ethics and other disciplines necessary to ensure trustworthy AI across Europe, providing legal and technical support to implement trustworthy AI, and mapping legal frameworks and creating new laws where the risks were considered high (e.g., when AI is used in the context of mass citizen scoring or autonomous weapons). Recommendations were even made to develop a European AI infrastructure based on personal data control and privacy (HLEG B, 2019).

Alongside the high-level expert group’s development of a set of ethics guidelines and policy and investment recommendations on AI, the European Commission’s treatment of a European ethics and values-based approach to AI also transformed from a brief “concern” in a political strategy (European Commission A, 2018) into a strategic point of positioning. Nathalie Smuha, who was the coordinator of the high-level group, has described how the group’s work was quickly adopted within the European Commission’s general AI strategy (Smuha, 2019). As she explains, the European Commission at that time counted around 700 active expert groups, such as the high-level expert group on AI, tasked with drafting opinions or reports advising the Commission on particular subjects. Their input was not binding, however, and the Commission was independent in the way it took the groups’ advice and expertise into account; only rarely did such input become the core topic of a Commission communication [10]. Nevertheless, when the high-level expert group presented the ethics guidelines to the Commission in March 2019, an almost immediate agreement was reached to publish the last communication of the two-year period, “Building trust in human-centric AI” (European Commission D, 2019), which stated its support for the seven key requirements of the guidelines and encouraged all stakeholders to implement them when developing, deploying or using an AI system [11]. This culminated in the promise made by the incoming president of the European Commission, Ursula von der Leyen, at the end of 2019: “In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence” [12].

 

++++++++++

Culture and technological change

How can we explain a forceful explication of cultural values as a strategic interest in the face of technological change? Early in the history of the introduction of computers in society, one of the pioneers of applied computer ethics, James H. Moor, described in his famous essay “What is computer ethics?” the policy vacuums that emerge when policies clash with technological developments, forcing us to “discover and make explicit what our value preferences are” [13]. He predicted that a computer revolution of society would happen in two stages marked by the questions we ask. In the first “introduction stage” we ask the functional questions — how well does this or that technology function for its purpose? In the second “permeation stage”, when institutions and activities are transformed, we start asking questions regarding the nature and value of things [14]. The historian of technology Thomas P. Hughes similarly detailed the general developmental phases of large evolving and expanding technological systems, from invention, development, innovation, transfer and growth to competition and consolidation (Hughes, 1987, 1983). Hughes refers to “a battle of the systems” in which an old and a new system exist at the same time in a relationship of “dialectical tension” [15]. The phase of competition and consolidation is therefore also a moment of conflict and resolution, not only among engineers but also in politics and law [16]. In these moments of conflict, critical problems are exposed, different interests are negotiated and finally gathered around solutions to direct the evolution of the systems. A new system, or the transformation of the old system, then evolves out of the very problems identified and solved in this phase. Unlike Moor, Hughes does not consider these moments of explication as solely induced by the transformative character of the technological systems. He considers their negotiation in complex social spaces. In fact, he holds that technologies themselves are intertwined with social, economic and cultural problems [17]. That is, in an STS perspective on technological change, such as Hughes’, large technical systems are sociotechnical, meaning that they are not just material and technical but also represent complex power dynamics between multiple actors and societal interests. Therefore, they cannot be explained with a focus on technical innovation or the engineering of materials only, as they are integrally part of society at large.

It follows that, to explain the socio-technical shape of the AI momentum of 2018–2019, we need to consider it as something more than just technically innovative, practically implementable and economically viable. We may describe it as “cultural”. To do this, we need some additional perspectives.

In a cultural studies perspective, culture is not singular but multifaceted — informally and formally created by and in interaction with people and artefacts — and the meaning of these cultural relations is in constant contestation and social negotiation. Raymond Williams, a founding Marxist scholar of the British cultural studies tradition, famously defined culture as “shapes”, a set of “purposes” and “meanings” that are expressed “in institutions, and in arts and learning” and in “ordinary” practice [18]. Accordingly, culture is “a whole way of life” [19]. It consists of prescribed dominant meanings and, more importantly, also the negotiations of these. The meaning of culture is in “(...) active debate and amendment under the pressures of experience, contact and discovery” [20], and as such it is simultaneously “traditional” and “creative”. Hence, there are two sides to culture: “(...) the known meanings and directions, which its members are trained to; the new observations and meanings, which are offered and tested.” [21]. In this perspective, culture is a site of power negotiation.

We may continue here and think of cultural power negotiations in the context of technological development and innovation. Here, culture, or the “cultural”, can be traced in the very design of technology. Hughes defines technological culture as a complex composite of socially embedded interests, goals and intentions [22]. Famously, he held that technological systems do not become autonomous by themselves but require momentum, which depends on the interests (the culture) of the organizations and people invested in the system [23]. He mentions a few of those invested in the development of the modern electric power system that we might also recognize as stakeholders in the AI momentum of the 2010s: “Manufacturing corporations, public and private utilities, industrial and government research laboratories, investment and banking houses, sections of technical and scientific societies, departments in educational institutions, and regulatory bodies ...” [24]. He contends that differences in “technological styles” became particularly apparent in the twentieth century due to the increasing availability of “international pools of technology” (including, e.g., international trade, patent circulation, the migration of experts, technology transfer agreements and other forms of knowledge exchange) [25]. Accordingly, he argues that technological style is the language of culture, so to speak, or, as he says, an “adaption to environment” [26]; that is to say, culture is the sum of “systemized knowledge” created in interaction with the economic and social institutions involved.

This view is characteristic of an STS perspective on the cultural components of technology development. Here, I will not get into debates regarding culture as the epistemological weight on the scale between social constructivism and relativism on the one side and technological determinism and natural realism on the other in studies of science and technology (e.g., as represented in the debate between Callon and Latour [1992] and Collins and Yearley [1992]). That is, although I recognize that culture is a contested concept, in STS it is more generally related to the way we get to know things and the skills and resources we use to create a technology. We might say that distinct “knowledge cultures” or “technological cultures” are the foundations of a technology’s design and adoption in society. Andrew Pickering, for example, describes culture as the resources that scientists use in their work, or a shared conceptual field [27]. Harry M. Collins defines cultural skills as intents and purposes and sets of rules of action for the design of a technology [28]. They are the inexplicable or “hidden” components of technology development [29]. He also argues that these implicit cultural skills of technology practitioners transform when they are made explicit and that this transformation of skills depends on changes in a “cultural ambience” that is “enmeshed in wider social and political affairs” [30].

The concept of “data cultures” can here be used to illustrate the cultural variations among the different technological “styles” in which data is managed and treated in technology design. These various styles could be described as the “technological cultures” of data design, based on shared skills and knowledge frameworks for data technology practitioners, implicit, for example, in ideals about the value of big data for technology development [31] and also explicitly described in data protection laws or ISO standards, such as 27701 on how to create privacy information management systems (PIMS). Thus, the very practices of data scientists and designers can be said to be framed within specific cultural systems of meaning-making, and the practice of developing a data system and its design is accordingly a cultural practice: “shaped by ideas about the cultivation and production of data that reflect epistemologies about, for example, ordering, classification, and standards” [32]. Accordingly, we may also argue that the very data design of a technology has cultural properties that can be examined as a culturally coded system. For example, AI is not just “coded” data; it is data culture in code. As such, the AI system’s data design, or any data design, is culture in action. As outlined by Collins (1987) in his description of cultural skills and AI, culture is in expert systems transformed into explicated categories, literally coded, and in advanced self-learning systems it is even encoded within the systems when autonomous machine predictions and decisions are made; that is, the cultural classification of the world is actively coded and produced within the system.

To conclude, technology development is enmeshed in cultural spaces that can be depicted as the epicenter of interest negotiations. Notably, Hughes illustrated how each developmental phase of a technological system produces a specific “culture of technology”, which is the sum of this complex set of interests. The technology culture is therefore, according to Hughes, also the basis of a momentum of a technological system, and, importantly, competing cultures must convert to the dominant culture of the momentum or perish (Hughes, 1987).

 

++++++++++

Making the invisible visible

In the late 2010s, opacity was often described as a core ethical challenge of the very design of AI (Burrell, 2016), on account of either intentional acts of creating obscurity with “secret algorithms” (Pasquale, 2015), inconceivable “math” (O’Neil, 2016) or permeating discursive power that concealed the interests of institutions and corporations (Zuboff, 2015). This is a core challenge that we may address here. As disparate as they may seem in their perception of the relation between culture and technological change, applied computer ethics, STS and cultural studies share an emphasis on the importance of making the invisible visible and explicating cultural components in order to effect change.

James H. Moor considers the “invisibility factor” [33], such as “invisible programming values” [34], a principal ethical challenge of the computer and its use per se. Collins [35] explains the move of taken-for-granted cultural skills from inexplicable to explicable categories as a way, among other things, to reduce ambiguity in knowledge and practice due to cultural and contextual distance. Hughes [36] takes a grander view when looking at the consolidation in society of larger technological systems, arguing that they do have a direction and that the explication of goals is therefore more important for a young system than for an old one.

In cultural studies and critical data studies, the explication of cultural components is coupled with the exposure of cultural power dynamics. For example, a distinct field of feminist technoscience scholars, including Judith Butler, Donna Haraway and Sandra Harding, has raised feminist critiques of science, technology, practices and knowledge in terms of the cultural gender power dynamics they reproduce and enforce (Åsberg and Lykke, 2010). Likewise, the data scientists and feminists Catherine D’Ignazio and Lauren F. Klein (2020) describe what they refer to as “oppressive” data science cultures in their book Data feminism. These, they argue, are reflected in the goals and priorities set for the very data design of the technology: for example, when minority groups are underrepresented in data used as the basis for decisions on social benefits, when critical scientific medical analysis only benefits one privileged group, or, conversely, when a minority group is overrepresented in data that puts it at a disadvantage in society, as when data from specific city zones is used for predictive policing. In these perspectives, the various meaning-making cultural practices and shared taken-for-granted cultural systems that naturalize specific situated views of the world and enforce power dynamics can only be challenged if explicated.

In other words, the cultural foundation of a technological system (what we have also referred to here as its “shape”, the “knowledge culture” behind it, its “technological style”) may also be seen as a prioritization of the inherent values of a cultural system. In an applied ethics perspective, values are, for example, described by the philosopher of technology Philip Brey as “idealized qualities or conditions in the world that people find good” [37]. These are ideals that we can work towards realizing in the design of a computer technology. Thus, technologies can have a specific cultural shape that consists of the implicit systems of organized knowledge, practices and meanings that go into their design. Values are not just personal ideals or transcendentally “true” or “good”; they are culturally situated and constantly engage with shared cultural purposes and common meanings through enforcement and/or negotiation (Williams, 1993). This is also true of our ethical thinking about digital technologies, where culture, as Charles Ess, for example, has illustrated in his analyses of ethics, culture and technologies, plays an essential role. Accordingly, in Western societies an ethical emphasis has been placed on “the individual as the primary agent of ethical reflection and action, especially as reinforced by Western notions of individual rights” [38]. As such, culturally situated ethical thinking also has an interest in the power dynamics of society regarding who or what ethics is for.

In this line of argument, a first step toward guiding the development of trustworthy AI would be to make the cultural foundation (the data cultures) visible. Essentially, we need to consider this explication of cultural components as an ethical and moral choice. As the information studies scholars Geoffrey C. Bowker and Susan Leigh Star [39] state in their work on classifications and standards in the development of information infrastructures, “Each standard and each category valorizes some point of view and silences another. This is not inherently a bad thing — indeed it is inescapable. But it is an ethical choice, and as such it is dangerous — not bad, but dangerous.”

 

++++++++++

Data interest analysis: The European agenda’s cultural shape of AI

I have so far examined how a European AI agenda evolved over a two-year period into a distinctive European cultural positioning with an emphasis on “ethical technologies” and “trustworthy AI” [40]. First and foremost, I examined this as an interest in shaping a technological AI momentum. In this last part of the article, I move on to an investigation of four cultural components of this cultural interest in the data of AI as it was explicated in an institutionally framed process in 2018–2019.

The four cultural components of the European data interest in AI

As illustrated, over the two-year period a negotiation of a shared cultural framework for the development and adoption of AI took place, above all broadly defined in terms of European values and ethics. Importantly, this also included a conceptualization of a European technology culture. I propose here that the European AI agenda sought to explicate this in four cultural components: (1) the cultural context, (2) the cultural foundation, (3) the technological data culture and (4) the cultural data space.

1. The cultural context: Defining the technological momentum

As we have learned, a technological system does not evolve autonomously; it is directed within a momentum that arises from the interests invested in the system (Hughes, 1983). The culture of a larger technological system is internal to the system in the sense that it represents the sum of the focused interests and forces at play in the momentum of this particular system. But culture is also a force external to the very system, a “cultural ambience” entangled in general social and political affairs [41]. Transformations in the resources, skills and knowledge that drive the development and adoption of a technological system can therefore also be influenced by changes in this “cultural ambience”.

Although AI systems had already been adopted and integrated primarily in some parts of the private sector in the late 2010s, their general adoption in European society, including in the public sector, was a recent development, and in policy-making the forceful focus on AI was new. Therefore, we may consider AI in terms of what Hughes (1983) refers to as a “young system” in society, for which the explication of goals is particularly pressing. Along these lines, the European AI agenda may equally be considered a cultural interest in shaping the technological momentum of AI systems and directing their evolution in society in Europe and globally. The high-level group’s policy and investment recommendations (HLEG B, 2019), published one year into the period in which the European AI agenda unfolded, describe the different societal phases of digitalization in which AI forms a “third wave” characterized by its adoption in European society: “Europe is entering the third wave of digitalization, but the adoption of AI technologies is still in its infancy. The first wave involved primarily connection and networking technology adoption, while the second wave was driven by the age of big data. The third wave is characterized by the adoption of AI which, on average, could boost growth in European economic activity by close to 20 percent by 2030. In turn, this will create a foundation for a higher quality of life, new employment opportunities, better services, as well as new and more sustainable business models and opportunities” [42].

The two-year period was characterized by a sense of urgency to gain force within a global AI momentum, and the stakeholders that constitute a momentum were therefore a central topic of negotiation and debate. This included, for example, a focus on AI practitioners, entrepreneurs, data analysts, educators, the work force, policy-makers and citizens in general. Not only were the stakeholder interests of the members of the high-level expert group a continuous topic of contestation in public debate, but a broad range of societal stakeholders were generally either sought out to participate, for example, in the AI Alliance multistakeholder online platform created as part of the strategy and in the public consultations on the high-level expert group reports, or addressed in the content of the reports and in presentations at various public events.

The depiction of an AI momentum was prevalent at the first public event that the high-level expert group was invited to attend [43]. Launched with a press release emphasizing the role of AI in “boosting European competitiveness” [44], the event started off with a speech by the then European Commissioner for Digital Economy and Society, Mariya Gabriel, who outlined the strategic goals of the European Commission with a clear message: European stakeholders could indeed shape the direction of AI. “We all have an important role to play in defining a shared European vision for Artificial Intelligence. Yes, ladies and gentlemen, digitalization is everywhere, data is everywhere. This is just the beginning of a new technological revolution.” [45]

Notably, like many others in this period, Gabriel in her speech also equated digitalization with data and at the same time described data as the foundation for AI evolution. In fact, the first European Commission communication on AI had recognized data as a key factor for the development of AI in Europe, with a reference to the creation of “data rich environments”, as “AI needs vast amounts of data to be developed” (European Commission A, 2018). Thus, the main driver for AI was held to be data. As described in the high-level expert group’s policy and investment recommendations, this “third wave” of technological development was in fact “driven by the age of big data”. The EU was here consequently also described as “a pivotal player in the data economy” [46], as data “is an indispensable raw material for developing AI” [47]. Therefore, data was also held to be at the core of what the stakeholder interests of the AI momentum were invested in: “Ensuring that individuals and societies, industry, the public sector as well as research and academia in Europe can benefit from this strategic resource is critical, as the overwhelming majority of recent advances in AI stem from deep learning on big data” [48].

2. The cultural foundation: The values and ethics framework

Culture is a shared conceptual framework for meaning production that consists of what we know and what we are trained in. A conceptual values-based framework for personal data is, in the European context, formalized in legal frameworks such as the General Data Protection Regulation and the Charter of Fundamental Rights. But culture also consists of the new meanings that are offered and contested (Williams, 1993); that is to say, culture is also a negotiation in which different cultures clash and conflicts of interest may emerge.

With the rise of data-intensive technologies such as AI, not only the law but also the meaning of a traditional European approach to handling personal data was challenged, and a process of cultural meaning negotiation was therefore initiated. This we may refer to as “data ethics spaces of negotiation” (Hasselbalch, 2019) that exposed the cultural contexts shaping the ethical thinking of this period and ultimately sought to resolve conflicts between competing value systems.

As described, the European AI agenda explicated a general human-centric approach stressing that human interests prevail over other interests, as well as a particular approach to data governance emphasizing the empowerment of individuals in the handling of their personal data. For example, the high-level expert group’s ethics guidelines outlined a clear framework for the management of data, with one of the seven requirements, “privacy and data governance”, specifically addressing the human-centric values embedded in the data design of an AI technology. In this context, the concept of human agency stood out, referring to the individual’s knowledge and the information provided to the individual to make decisions and challenge automated systems [49].

The human-centric approach came to represent the European AI agenda’s overarching framework for resolving the different interests and values embedded in AI innovation. Conflicts existed between data protection/privacy and ethics on the one hand and data-driven innovation on the other, between machine automation and the human work force, between the interests of the individual and those of society/public institutions, as well as between scientific and governmental interests. Most importantly, as stated in the policy and investment recommendations: “AI is not an end in itself, but a means to enhance human well-being and flourishing” [50]. As a result, human-centric practical solutions for resolving such conflicts were suggested: ethical technology as a competitive advantage (resolving conflicts between ethics and data-driven innovation); humans-in-the-loop AI solutions for the workplace and upscaling the AI skills of the work force (resolving conflicts between automation and the replacement of workers); data design as an enabler of human well-being and protection, such as developing mechanisms for the protection of personal data and for individuals to control and be empowered by their data (resolving conflicts between the interests of the individual and society); and generally focusing on the use of non-personal data in business-to-business (B2B) AI solutions rather than the personal data of business-to-consumer (B2C) solutions (resolving conflicts between the risks of using personal data and the data intensity of AI technology development).

3. The technological data culture: Skills, knowledge, style and resources

Technological development is not neutral. Engineers and designers develop technologies within shared knowledge cultures that form the foundation for their work. These foundational cultural frameworks can be described as “technology cultures” — shared fields of resources, implicit and/or explicit skills, experiences, methods and even tools that practitioners use when they build technologies and that therefore also contribute to the shaping of technological development.

As described previously, during the process of developing the European AI agenda, the explication of a European “technological culture” for the development of AI became an essential focal point. In this respect, the skills, education, methods and practices needed in the developmental phase of what was referred to as “ethical technology” were core to discussions concerning economic investments, awareness raising and policies. In fact, as previously illustrated, a European ethical design culture grew into being as the European position in the global AI momentum.

In 2019, at the first AI Alliance assembly in Brussels, Commissioner Mariya Gabriel talked about getting the “policy right”, which meant adopting and developing AI with, as she said, “a decisive, yes, but”. This “but” was a reference to European risk mitigation of the challenges of AI [51]. The first challenge she mentioned was global competition (e.g., that the EU was several billion euros behind in terms of investments in AI); the second, according to Gabriel, was the social impact of AI; and the third was “ethical and legal concerns”. Her suggestion was to invest in education and training, digital education plans and the development of digital skills in Europe.

The European strategic investment in a particular “technology culture” of AI was also an essential focus of the high-level expert group’s policy and investment recommendations. First and foremost, it came to signify a shared foundational AI knowledge culture. Europe needed to “foster understanding” and “creativity” [52] and generally “empower humans by increasing knowledge and awareness of AI” [53]. In this way, an entire section of the recommendations focused on “generating appropriate skills and education for AI”. This was not limited to technical skills but also included “socio-cultural skills” [54]. In general, there was a key focus on developing new skills, or updating the skills, of not just engineers but also policy-makers and the general work force. This was extended with a call to develop basic education on AI and literacy in higher and lower education and an “AI competence framework for individuals” [55]. In many instances, the term “data literacy” was used interchangeably with the concept of “digital literacy”. Markedly, the public sector was described as playing a fundamental role in the development of a Trustworthy AI “technology culture”, e.g., by fostering “responsible innovation” through public procurement.

It was one thing to assume that a particularly European “technology culture” of AI was needed for Europe to succeed in global competition. But how was this “technology culture” then explicated? Here, the assessment list of the ethics guidelines was particularly interesting, as it detailed concrete questions to guide the design, management and development of AI within each of the seven requirements for trustworthy AI. A “data culture” for AI was explicated here, most explicitly in section 3 on “Privacy and data governance”. The point of departure was privacy and data protection, moving on from there to ensuring the quality and integrity of data and procedures for managing access to data.

4. The cultural data space: The infrastructure

According to Hughes (1983), technological style differs from region to region and nation to nation. He equates culture with geographically and jurisdictionally delineated spaces. But we may also add to this depiction the technological evolution of space, which has challenged this very correlation between culture, geography and jurisdiction. As a consequence, culture is no longer just the asset of a nation, rooted in geography and national law, but is increasingly extended into virtual communities with “cultures” or “subcultures” delineated by symbolic borders of cultural values and ideas. At the beginning of the twenty-first century, “data cultures” had been created on the basis of an interjurisdictional digital flow of data. As such, the very “architecture” of a global data infrastructure had emerged as an interjurisdictional space challenging first and foremost European data protection/privacy values and legal frameworks. For example, the European Court of Human Rights (ECHR) very early began considering, in case law concerning the right to privacy, the level of uncertainty that the challenges of technological progress posed to its territorial definition of jurisdiction [56].

In the 2010s, AI was developed primarily on the basis of an interjurisdictional and territorial global big data infrastructure. However, the revelations of embedded data asymmetries in the form of surveillance scandals, fake news and voter manipulation had provoked a European concern with foreign “data cultures” and their “data architectures”. The European AI agenda proposed an alternative European data-sharing infrastructure for AI, based on a foundational values-based approach to data but also confined within the European jurisdiction and geographical space. In its published policy and investment recommendations, the high-level group described data infrastructures as the “basic building blocks of a society supported by AI technologies”. Data infrastructures were described as the foundation of a European AI critical public infrastructure and should therefore be treated as such: “Consider European data-sharing infrastructures as public utility infrastructures.” Thus, the development of this European space should also be invested with a specific set of values and designed “with due consideration for privacy, inclusion and accessibility, by design” [57].

It was particularly in this description of a European AI data infrastructure and architecture that the cultural interest in data stood out. Thus, the values-based approach was also conceived of as a cultural effort to transfer European values into technological development, positioned against a “non-European” threat perceived to be pervasively embedded in technological infrastructures: “Digital dependency on non-European providers and the lack of a well-performing cloud infrastructure respecting European norms and values may bear risks regarding macroeconomic, economic and security policy considerations, putting datasets and IP at risk, stifling innovation and commercial development of hardware and computer infrastructure for connected devices (IoT) in Europe” [58].

 

++++++++++

Conclusion

As wild and unruly as it may seem, construed in a hodgepodge of complex relations, interests, symbolic meaning-making, people and artefacts, a technological momentum also has a shape — a shape that guides its direction: values, knowledge, resources and skills that form its technological architecture and its governance, adoption and reception in society. At times, this shape is more explicitly “cultural” and values-oriented than at others, for example when a momentum is large and socially and culturally transformative or when it spreads on a global scale.

The global AI momentum of the 2010s was such a moment. Big data systems empowered by AI technologies were transforming European societies, challenging what were held to be fundamental European values, and forcing an explication of what it means to do AI the “European way”. With which values should AI be designed? Which interests should drive the development? With what skills and education? What role should technology and science play in society? And could Europe even compete on those grounds? Information policy approaches were transforming from narrow functional focuses on the digitalization of “everything” to more complex and multifaceted values-based emphases on the ethical and social implications of data technologies, including everything from legislative measures in competition, data protection, criminal and consumer protection law to research and innovation investments in “ethical technology” development and a European data-sharing infrastructure.

By 2018, Europe had been going through a period of self-exploration regarding the role of big data and emerging technologies in European societies in general. Following an all-encompassing digitalization wave, the social and ethical implications were materializing in big data scandals and revelations. A recent reform of the European data protection legal framework was presented as Europe’s powerful global response to these challenges. However, a law did not seem to be a sufficient governance response by itself, and a process was therefore initiated to develop what was referred to as a European approach to what was perceived as a general AI evolution of the age of big data.

In this article, I have investigated a cultural interest in the global AI momentum — the cultural shape it took with an emphasis on “ethical technology” and “trustworthy AI” in response to global AI innovation, how it evolved in a process of public events and the work of a high-level expert group on AI established by the European Commission, and how this cultural interest took form as a data interest that was explicated in policy and investment recommendations as well as in a set of ethics guidelines. I relied on the thesis that technological development is not neutral. This also means that the culture of a technological design is not a randomly adapted technological style. It is a sum of interests, value frameworks and the negotiations of these, and if these are made visible, we can argue that the technological development of society can be shaped and chosen. The choice to do AI ethically and responsibly is not a simple one; in fact, it is as complex as the culture we are trying to shape with it.

I want to suggest here that to direct the AI momentum of the age of big data, we need an ethics concerned with the embedded data interests and powers, what I have also referred to as a “data ethics of power” (Hasselbalch, 2019). I have previously (Hasselbalch, 2019) described how European policy- and decision-makers in the late 2010s were positioning themselves against a threat to European values and ethics perceived to be embedded in the big data socio-technical systems of what was named “GAFA” (an acronym for the four big U.S. tech companies Google, Apple, Facebook and Amazon). Considering this “cultural ambience” (Collins, 1987), I propose that the cultural positioning of the European AI agenda may also be viewed as a data ethical choice formulated in direct response to the technological data cultures of the dominant AI technologies at that time.

In this article, I focused primarily on the institutional explication of European cultural values as an interest in a technological momentum. I did not seek to predict the actual adoption and implementation of AI in Europe. Nevertheless, a few considerations and recommendations can be made concerning the implementation of a European “third way” in the global arena with an emphasis on the development of trustworthy AI and the human-centric approach. Following the two-year period examined in this article, the European Commission in 2020 published a comprehensive strategy on the digital, AI and data future of Europe (European Commission E, F and G, 2020). While this strategy was ambitious with respect to furthering Europe’s position in the “global AI race” by advancing a general AI uptake and taking back control of a European data resource space, the values-based European “third way” was mostly addressed in a legal compliance and requirements framework (Hasselbalch, 2020). Based on this article’s delineation of the complex factors that constitute an ethical “data culture”, we may argue that this is not enough and that an additional set of combined governance tools is needed to shape the technological AI momentum’s data cultures as “trustworthy”. For example, we need investment, innovation, research and education in “ethical technology” components and processes specifically (from human-in-the-loop features and state-of-the-art anonymization techniques to ethical impact assessments). The ethical component of AI implementation in Europe cannot just be ancillary to the European AI uptake; an ethical data culture for Europe needs a dedicated economic, social and political investment.

 

About the author

Gry Hasselbalch is co-founder of the thinkdotank DataEthics.eu, established in 2015. She was a member of the European Commission’s high-level expert group on AI that developed ethics and policy guidelines for AI in Europe in the period 2018–2020, co-chair of the IEEE P7006 standard on personal data AI agents, and a member of the Danish Expert Group on Data Ethics, appointed to provide the Danish government with recommendations on data ethics issues in 2018. Gry is also a Ph.D. fellow at the University of Copenhagen (2017–2020). Previously, she worked for more than 10 years on youth online empowerment in Insafe, a pan-European network of 30+ awareness centers. Since 2016, Gry has also been an ethics reviewer for the European Research Council (ERCEA) and the European Commission’s Horizon 2020 programme.
E-mail: mediamocracy [at] protonmail [dot] com

 

Notes

1. The definition of artificial intelligence has changed throughout history since the 1950s with the development of different scientific and social paradigms. As such, in the 2010s the term AI still did not have one shared signification. In this article, I do not consider the “intelligence” of AI (technologically or philosophically) but use the term generically to address public discourse on the topic. My emphasis is on AI as automated decision-making, data-intensive systems that are designed to perceive their environment by acquiring data and interpreting that data to decide on actions to achieve a goal (HLEG A, 2019, p. 36).

2. The following examples are from a report by the Berlin-based NGO AlgorithmWatch, published in 2019, that takes stock of automated decision-making (ADM) in the EU. Retrieved from https://algorithmwatch.org/en/publication/automating-society-available-now/.

3. HLEG C, p. 4.

4. HLEG C, p. 2.

5. HLEG C, p. 5.

6. HLEG A, p. 4.

7. HLEG A, p. 9.

8. Upon suggestion from the author of this paper and based on a conversation with Sille Obelitz Søe. Argument for this revision can be found here: https://dataethics.eu/why-trust-in-ai-is-not-enough/.

9. HLEG D, p. 2.

10. Smuha, 2019, p. 104.

11. Ibid.

12. Von der Leyen, 2019, p. 13.

13. Moor, 1985, p. 267.

14. Moor, 1985, p. 271.

15. Hughes, 1983, pp. 106–139.

16. Hughes, 1983, p. 107.

17. Hughes, 1987, p. 51.

18. Williams, 1993, p. 6.

19. Williams, 1993, p. 8.

20. Williams, 1993, p. 6.

21. Ibid.

22. Hughes, 1983, p. 15.

23. Hughes, 1987, p. 198.

24. Hughes, 1987, pp. 76–77.

25. Hughes, 1987, p. 69.

26. Ibid.

27. Pickering, 1992, pp. 3–4.

28. Collins, 1987, p. 344.

29. Collins, 1987, p. 338.

30. Collins, 1987, p. 344.

31. Mayer-Schoenberger and Cukier, 2013, pp. 98–122.

32. Acker and Clement, 2019, p. 3.

33. Moor, 1985, p. 272.

34. Moor, 1985, p. 273.

35. Collins, 1987, p. 343.

36. Hughes, 1987, p. 15.

37. Brey, 2010, p. 46.

38. Ess, 2013, p. 196.

39. Bowker and Star, 1999, p. 15.

40. I examine the European AI strategy (described in the two communications “Artificial intelligence for Europe” and “Building trust in human-centric artificial intelligence” (April 2019), in the “Declaration of cooperation on artificial intelligence” (April 2018) and the “Coordinated plan on artificial intelligence ‘made in Europe’” (December 2018)) with a core focus on the work of the European high-level group on AI and the two core deliverables of this group: the “Ethics guidelines for trustworthy AI” (April 2019) and the “Policy and investment recommendations for trustworthy AI” (June 2019). The investigation is based on a qualitative reading of these documents and includes perspectives from the process of the very development of these two documents (with reference to public minutes and records from meetings) as well as from concurrent European policy responses.

41. Collins, 1987, p. 344.

42. HLEG B, 2019, pp. 6–7.

43. The AI forum held in Helsinki in October 2018, co-hosted by the Ministry of Economic Affairs and Employment of Finland and the European Commission.

44. Finnish Ministry of Economic Affairs and Employment, at https://tem.fi/en/-/ai-forum-2018-tekoaly-vahvistamaan-euroopan-kilpailukykya.

45. Speech retrieved from https://www.tekoalyaika.fi/en/ai-forum-2018/.

46. HLEG B, 2019, p. 16.

47. HLEG B, 2019, p. 28.

48. Ibid.

49. HLEG B, 2019, p. 16.

50. HLEG B, 2019, p. 9.

51. Speech retrieved from https://ec.europa.eu/digital-single-market/en/news/first-european-aialliance-assembly.

52. HLEG B, 2019, p. 9.

53. HLEG B, 2019, p. 10.

54. HLEG B, 2019, p. 32.

55. HLEG B, 2019, p. 10.

56. See an analysis with key case law references at https://mediamocracy.files.wordpress.com/2010/05/privacy-and-jurisdiction-in-the-network-society.pdf.

57. HLEG B, 2019, p. 28.

58. HLEG B, 2019, p. 3.

 

References

A. Acker and T. Clement, 2019. “Data cultures, culture as data — Special issue of cultural analytics,” Journal of Cultural Analytics (10 April).
doi: https://doi.org/10.22148/16.035, accessed 12 November 2020.

E. Alpaydin, 2016. Machine learning: The new AI. Cambridge, Mass.: MIT Press.

J. Angwin, J. Larson, S. Mattu and L. Kirchner, 2016. “Machine bias — There’s software used across the country to predict future criminals. And it’s biased against blacks,” ProPublica (23 May), at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 12 November 2020.

G.C. Bowker and S.L. Star, 1999. Sorting things out: Classification and its consequences. Cambridge, Mass.: MIT Press.

P. Brey, 2010. “Values in technology and disclosive ethics,” In: L. Floridi (editor). Cambridge handbook of information and computer ethics. Cambridge: Cambridge University Press, pp. 41–58.
doi: https://doi.org/10.1017/CBO9780511845239.004, accessed 12 November 2020.

J. Burrell, 2016. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms,” Big Data & Society, volume 3, number 1 (6 January).
doi: https://doi.org/10.1177/2053951715622512, accessed 12 November 2020.

M. Callon and B. Latour, 1992. “Don’t throw the baby out with the bath school! A reply to Collins and Yearley,” In: A. Pickering (editor). Science as practice and culture. Chicago: University of Chicago Press, pp. 343–368.

H.M. Collins, 1987. “Expert systems and the science of knowledge,” In: W.E. Bijker, T.P. Hughes and T. Pinch (editors). The social construction of technological systems: New directions in the sociology and history of technology. Cambridge, Mass.: MIT Press, pp. 329–348.

H.M. Collins and S. Yearley, 1992. “Epistemological chicken,” In: A. Pickering (editor). Science as practice and culture. Chicago: University of Chicago Press, pp. 301–326.

C. D’Ignazio and L.F. Klein, 2020. Data feminism. Cambridge, Mass.: MIT Press.

D. Epstein, C. Katzenbach and F. Musiani, 2016. “Doing Internet governance: Practices, controversies, infrastructures, and institutions,” Internet Policy Review, volume 5, number 3 (30 September).
doi: https://doi.org/10.14763/2016.3.435, accessed 12 November 2020.

C. Ess, 2013. Digital media ethics. Cambridge: Polity Press.

G. Hasselbalch, 2020. “EU’s digital, AI and data strategy lacks ambition on ethics and trustworthy AI” (21 February), at https://dataethics.eu/eus-digital-ai-and-data-strategy/, accessed 12 November 2020.

G. Hasselbalch, 2019. “Making sense of data ethics. The powers behind the data ethics debate in European policymaking,” Internet Policy Review, volume 8, number 2 (13 June).
doi: https://doi.org/10.14763/2019.2.1401, accessed 12 November 2020.

T.P. Hughes, 1987. “The evolution of large technological systems,” In: W.E. Bijker, T.P. Hughes and T. Pinch (editors). The social construction of technological systems: New directions in the sociology and history of technology. Cambridge, Mass.: MIT Press, pp. 51–82.

T.P. Hughes, 1983. Networks of power: Electrification in Western society, 1880–1930. Baltimore, Md.: Johns Hopkins University Press.

N. Kobie, 2018. “The complicated truth about China’s social credit system,” Wired (7 June), at https://www.wired.co.uk/article/china-social-credit-system-explained, accessed 12 November 2020.

V. Mayer-Schönberger and K. Cukier, 2013. Big data: A revolution that will transform how we live, work and think. London: John Murray.

F. Merz, 2019. “Europe and the global AI race,” CSS Analyses in Security Policy, number 247, at https://css.ethz.ch/content/dam/ethz/special-interest/gess/cis/center-for-securities-studies/pdfs/CSSAnalyse247-EN.pdf, accessed 12 November 2020.

J.H. Moor, 1985. “What is computer ethics?” Metaphilosophy, volume 16, number 4, pp. 266–275.
doi: https://doi.org/10.1111/j.1467-9973.1985.tb00173.x, accessed 12 November 2020.

C. O’Neil, 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. London: Penguin Books.

F. Pasquale, 2015. The black box society: The secret algorithms that control money and information. Cambridge, Mass.: Harvard University Press.

A. Pickering, 1992. “From science as knowledge to science as practice,” In: A. Pickering (editor). Science as practice and culture. Chicago: University of Chicago Press, pp. 1–26.

M. Spielkamp (editor), 2019. “Automating society: Taking stock of automated decision-making in the EU,” at https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf, accessed 12 November 2020.

N.A. Smuha, 2019. “The EU approach to ethics guidelines for trustworthy artificial intelligence,” Computer Law Review International, volume 20, number 4, pp. 97–106.

R. Williams, 1993. “Culture is ordinary,” In: A. Gray and J. McGuigan (editors). Studying culture: An introductory reader. London: Edward Arnold, pp. 5–14.

S. Zuboff, 2014. “A digital declaration,” Frankfurter Allgemeine (9 September), at https://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html, accessed 12 November 2020.

C. Åsberg and N. Lykke, 2010. “Feminist technoscience studies,” European Journal of Women’s Studies, volume 17, number 4, pp. 299–305.
doi: https://doi.org/10.1177/1350506810377692, accessed 12 November 2020.

European AI agenda document

European Commission A, 2018. “Artificial intelligence for Europe” (25 April), at https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe, accessed 12 November 2020.

European Commission B, 2018. “Declaration of cooperation on artificial intelligence,” at https://ec.europa.eu/digital-single-market/en/artificial-intelligence#Declaration-of-cooperation-on-Artificial-Intelligence, accessed 12 November 2020.

European Commission C, 2018. “Coordinated plan on artificial intelligence ‘made in Europe’” (7 December), at https://ec.europa.eu/commission/presscorner/detail/ro/memo_18_6690, accessed 12 November 2020.

European Commission D, 2019. “Building trust in human-centric artificial intelligence” (9 April), at https://ec.europa.eu/digital-single-market/en/news/communication-building-trust-human-centric-artificial-intelligence, accessed 12 November 2020.

European Commission E, 2020. “Shaping Europe’s digital future,” at https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/shaping-europe-digital-future_en, accessed 12 November 2020.

European Commission F, 2020. “A European strategy for data,” at https://ec.europa.eu/digital-single-market/en/european-strategy-data, accessed 12 November 2020.

European Commission G, 2020. “White paper on artificial intelligence: A European approach to excellence and trust” (18 February), at https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en, accessed 12 November 2020.

Finnish Ministry of Economic Affairs and Employment, 2018. “AI Forum 2018: Artificial intelligence to boost European competitiveness” (13 September), at https://tem.fi/en/-/ai-forum-2018-tekoaly-vahvistamaan-euroopan-kilpailukykya, accessed 12 November 2020.

HLEG A, High-Level Expert Group on Artificial Intelligence, 2019. “Ethics guidelines for trustworthy AI” (8 April), at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai, accessed 12 November 2020.

HLEG B, High-Level Expert Group on Artificial Intelligence, 2019. “Policy and investment recommendations for trustworthy AI” (26 June), at https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence, accessed 12 November 2020.

HLEG C, High-Level Expert Group on Artificial Intelligence, 2018. “Minutes of the first meeting” (27 June), at https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeeting%20&meetingId=5190, accessed 12 November 2020.

HLEG D, High-Level Expert Group on Artificial Intelligence, 2018. “Report of the AI HLEG workshop of 20 September 2018,” at https://ec.europa.eu/futurium/en/european-ai-alliance/report-ai-hleg-workshop-2092018, accessed 12 November 2020.

Organisation for Economic Co-operation and Development (OECD), 2019. “Recommendation of the Council on Artificial Intelligence,” at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449, accessed 12 November 2020.

U. von der Leyen, 2019. “A Union that strives for more. My agenda for Europe: Political guidelines for the next European Commission 2019–2024,” at https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf, accessed 12 November 2020.

 


Editorial history

Received 26 June 2020; revised 21 August 2020; accepted 16 September 2020.


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Culture by design: A data interest analysis of the European AI policy agenda
by Gry Hasselbalch.
First Monday, Volume 25, Number 12 - 7 December 2020
https://firstmonday.org/ojs/index.php/fm/article/download/10861/10010
doi: https://dx.doi.org/10.5210/fm.v25i12.10861