First Monday

Notes towards infrastructure governance for large language models by Lara Dal Molin

This paper draws on information infrastructures (IIs) in science and technology studies (STS), as well as on feminist STS scholarship and contemporary critical accounts of digital technologies, to build an initial mapping of the infrastructural mechanisms and implications of large language models (LLMs). Through a comparison with discriminatory machine learning (ML) systems and a case study on gender bias, I present LLMs as contested artefacts with categorising and performative capabilities. This paper suggests that generative systems do not tangibly depart from traditional, discriminative counterparts in terms of their underlying probabilistic mechanisms, and therefore both technologies can be theorised as infrastructures of categorisation. However, LLMs additionally retain performative capabilities through their linguistic outputs. Here, I outline the intuition behind this phenomenon, which I refer to as “language as infrastructure”. While traditional, discriminative systems “disappear” into larger IIs, the hype surrounding generative technologies presents an opportunity to scrutinise these artefacts, to alter their computational mechanisms and introduce governance measures [1]. I illustrate this thesis through Sharma’s [2] formulation of “broken machine”, and suggest dataset curation and participatory design as governance mechanisms that can partly address downstream harms in LLMs (Barocas, et al., 2023).


1. Introduction
2. Categorisation in discriminative and generative ML systems
3. Language as infrastructure
4. “Broken machines” as opportunities for infrastructure governance
5. Conclusion



1. Introduction

Machine learning (ML) systems are artificial intelligence (AI) technologies that improve their functionality over time based on examples (Russell and Norvig, 2010). ML systems broadly distinguish between discriminative and generative models, respectively producing output scores or categories for existing data on the one hand and generating novel data and outputs on the other (Hastie, et al., 2008). Large language models (LLMs), commonly associated with the generative paradigm, are ML systems that generate new text from very large corpora of data in ways that aim to mimic a human author (Jurafsky and Martin, 2008; Luitse and Denkena, 2021). At present, these are portrayed by their advocates as leading candidates in the achievement of artificial general intelligence (AGI) (OpenAI, 2018; Bubeck, et al., 2023). Particularly relevant to LLMs are sociotechnical accounts of computers as “thinking machines”, characterised by promises of efficiency, rationality, and objectivity [3]. In this context, Alexander (1990) draws a parallel between computational technologies and sacred entities, suggesting the existence of imagined associations between sophistication and awesomeness, which in turn translate to divine metaphors. In popular accounts, the uncanny generative capacities of LLMs are reflected in narratives of omniscience and algorithmic fetishism (Thomas, et al., 2018).

Despite these narratives, this paper argues that both discriminative and generative ML systems, including large language models (LLMs), should be conceptualised as infrastructures of categorisation. The following analysis situates these systems within information infrastructures (IIs) literature, feminist STS scholarship, and critical algorithm studies to consider the capability of these systems to exclude through categories. I propose this capacity of ML as an infrastructure governance problem. Drawing particularly on Weiser (1991), Bowker and Star (1999), and Clarke and Star (2007), and on technical literature in ML, I argue that the computational mechanisms enabling categorisation in both discriminative and generative systems are troublesome when applied to human characteristics, as they reinforce predefined, standardised, and mutually exclusive categories. As the next section describes, in both types of ML systems, the categories embedded in infrastructural technologies that disappear into the background of social life are of particular importance for questions of power and performance (Weiser, 1991). Based on a rich body of scholarship highlighting the capacity of ML algorithms to uphold social and political orders, I suggest that the categories embedded in LLMs are performative; LLMs as artefacts further consolidate mutually exclusive categorisation through their linguistic outputs (Barocas and Selbst, 2016; Eubanks, 2018; Criado Perez, 2019; Amoore, 2023). I refer to the phenomena that compose this complex scenario as “language as infrastructure”. Finally, this paper identifies governance opportunities for generative systems. Based on Sharma’s (2020) conceptualisation of the “broken machine”, I suggest dataset curation and participatory design as potential governance mechanisms to redress epistemic imbalances in LLMs.



2. Categorisation in discriminative and generative ML systems

According to Weiser [4] “the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”. The field of STS ascribes this capacity to information infrastructures (IIs), which encompass relations between material and abstract entities, including communication networks, but also protocols, standards, and classification systems (Star and Ruhleder, 1994; Bowker, et al., 2010; Musiani, et al., 2016). IIs seamlessly operate in the background, carrying out “invisible work” that is often habitual, taken for granted and consequently difficult to map [5]. Bowker and Star [6] describe the power dimensions of classification systems, suggesting that “everyday categories are precisely those that have disappeared into infrastructure, into habit, into the taken for granted” and that “the moral questions arise when the categories of the powerful become the taken for granted; when policy decisions are layered into inaccessible technological structures; when one group’s visibility comes at the expense of another’s suffering”. IIs therefore uphold and stabilise existing configurations of categories, thus cementing social structures and obscuring certain groups of people (Bowker and Star, 1999). The social worlds framework, formalised by Clarke and Star [7], likewise argues that infrastructures are “frozen discourses” often shared by “communities of collective understanding, action, and meaning-making” [8]. Clarke and Star’s [9] adjective frozen is particularly relevant for this analysis, as it emphasises rigidity as an attribute of IIs, with a corresponding capacity to uphold power structures. For IIs, governance is an ecosystem that encompasses institutions, laws, policies, and technology design (DeNardis and Musiani, 2016). Based on this definition, the dispersed and decentralised nature of IIs represents a challenge to governance in itself. 
Therefore, in the context of classification systems, turning a spotlight on the mechanisms that enable these to operate can support the establishment of points of infrastructural control (DeNardis and Musiani, 2016).

The outlined characteristics of IIs — their invisibility and rigidity, as well as their capability to cement and reproduce power — may be detected in ML algorithms called artificial neural networks (ANNs). These are the building blocks of numerous ML systems, both discriminative and generative. In the parlance of the field of AI, ANNs are algorithms articulated in layers of interconnected units that propagate one or several functions with the objective of optimising them (Russell and Norvig, 2010). More simply, these can be understood as conceptually mimicking the operation of a mammalian brain; as the system is exposed to training data, layers of ‘neurons’ modelled in code are tuned with different parameters. This training process strengthens or weakens the connections between different neurons, allowing them to process and propagate data signals throughout a network. Whether the data input into the network are text, images, or other sources, the aim of this approach is to summarise key sets of features of these input sources as structures of weighting and connection within the ‘hidden layers’ of the neural network. According to Russell and Norvig [10], single units only propagate a function — or “fire” — when a combination of inputs exceeds some predetermined threshold. The “arguments of the maxima” — or argmax — function can be used within ANNs to determine output probabilities (Sanderson, 2017). In this context, argmax finds the output class, label, or score with the highest value to make this determination (Russell and Norvig, 2010; Patel, 2021). This approach allows features of very large and complex input data sources to be reduced to a relatively small number of parameters, expressed as the weights of individual links within the network. For this reason, components of ANNs may be considered “classifiers”: once this process reaches the output layer of units, only one neuron fires, and it is associated with one particular output category [11].
These categories, often pre-defined and standardised, are always mutually exclusive (Amoore, 2021).
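The winner-take-all behaviour of such a classifier can be sketched in a few lines of Python. This is a toy illustration rather than the code of any particular system: the category names, scores, and the softmax normalisation step are all hypothetical.

```python
import math

def softmax(logits):
    """Convert raw output-layer scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical output layer: one unit per predefined, mutually exclusive category.
categories = ["category_a", "category_b", "category_c"]
logits = [2.0, 0.5, 1.1]  # raw scores from the network's final layer

probs = softmax(logits)
winner = categories[probs.index(max(probs))]  # only one "neuron fires"
```

However narrow the margin between categories, argmax discards every runner-up: the output is always a single, mutually exclusive class.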

Similarly to most systems of classification, ML technologies that incorporate ANNs are problematic when applied to human characteristics. In sections 2.1 and 2.2, I explore this issue for discriminative and generative systems respectively. For the purpose of illustrating this critique, I define here the concept of downstream harms, or negative consequences related to the outputs of ML systems (Suresh and Guttag, 2021). While there can be different sources of downstream harms in ML systems, this paper is especially concerned with training data and model architecture (Hovy and Prabhumoye, 2021). Barocas, et al. (2023) proposed a taxonomy of downstream harms for automated decision-makers articulated in imbalance, biases, stereotypes, and categorisation. According to Barocas, et al., imbalance refers to an uneven representation of different groups, biases describe incorrect associations related to social and historical prejudice, stereotypes reflect real-world products of ideological hierarchies, and categorisation denotes the ascription of binary properties to spectral characteristics of identity. Therefore, based on these points, downstream harms take different forms based on a system’s specifics. However, this paper argues that the underlying computational mechanisms of categorisation — because of their invisibility, rigidity, and capability to reproduce power — uphold and enable downstream harms across ML systems. In the following analysis, I support this point by illustrating that Barocas, et al.’s (2023) taxonomy of downstream harms — originally devised solely for discriminative decision-makers — applies to both discriminative systems and LLMs.

2.1. Discriminative systems

The basic functionalities of ANNs are commonly associated with traditional — or discriminative — ML paradigms. Discriminative systems execute some kind of decision-making in the form of output scores or categories, based on examples supplied through training data (Russell and Norvig, 2010; Goodfellow, et al., 2020). Applications of these systems are widespread and comprise, for instance, clinical diagnosis, image classification, and e-mail filtering, as well as more elaborate product recommendations and predictions of individual preferences (Nichols, et al., 2019).

Discriminative systems are problematic when applied to human personal characteristics, such as gender and ethnicity, as attempts to computationally discriminate between human characteristics lead to the systematic targeting and degradation of marginalised groups. Let us consider some examples. In the Gender Shades project, Buolamwini and Gebru (2018) evaluated two facial analysis benchmark datasets and revealed that these overwhelmingly represented white-skinned subjects. Consequently, Buolamwini and Gebru (2018) examined three commercial gender classification systems and found that these systems misclassified dark-skinned females at a higher rate than any other group. Based on Barocas, et al.’s (2023) taxonomy of downstream harms, this could be considered an instance of imbalance. Eubanks (2017) investigated the social impacts of automated decision-making, including predictive policing systems and predictive risk models, in relation to health, benefits, and insurance. Through the inevitable incorporation of biased historical data, these classifiers aggravated the conditions of impoverished individuals and communities by classifying some individuals as high-risk (Eubanks, 2017; Barocas, et al., 2023). The issue of bias was further explored by Raghavan, et al. (2020), who critiqued algorithmic ML-based hiring systems. When trained on past hiring data, these systems reproduced a tendency to hire dominant groups and therefore reinforced long-standing biases against women and ethnic minorities. Concerning downstream harms, this example illustrates both bias and stereotypes, as the ML system reproduces professional hierarchies based on historically prejudiced associations (Barocas, et al., 2023). In a different example, Costanza-Chock (2020) analysed millimetre wave scanners deployed in airports. Here, when airport operators performed security controls on passengers, they had to select the gender of the individual being scanned, choosing between “male” and “female”.
However, if the machine detected body sections that deviated from traditional definitions of these categories, the passenger was flagged and searched. This is an instance of categorisation, as the system reduced intersectional human characteristics to discrete classes (Barocas, et al., 2023). Also concerning categorisation, the Better Tomorrow tactical surveillance system, developed by the software company AnyVision (2020), claimed to detect ethnicity through facial recognition. Here, the ML system attributed a fixed ethnic category to an individual’s face. Better Tomorrow was reportedly sold to the Israeli government to support the screening and surveillance of the Palestinian population in the West Bank (Solon, 2020).

The unsettling nature of the outlined examples can be attributed to a tension between the artificially generated, mutually exclusive output categories of ML systems and the mutually constitutive intersectional identities of people. Intersectional identities comprise, among other characteristics, ethnicity, gender, sexual orientation, religious and political affiliations, ability status, and colonial and Indigenous histories (Crenshaw, 1989; Bowleg, 2008). The continuous application of the argmax function within algorithms that handle human characteristics is conceptually incommensurable with ideas of intersectionality, as the category with the highest value becomes the only one considered. Concretely, although a network may represent very large and complex datasets and relationships, each individual neuron either fires or does not fire based on a single ‘most important’ characteristic. In these contexts, at each point of connection, categorising operations reduce highly complex, fluid, and interacting social categories to fixed values, thus exacerbating the oppressive potential of algorithms. Overall, the disproportionate representation of dominant social groups and the lack of data reflecting the experiences of minorities can lead to their systematic marginalisation and exclusion (Criado Perez, 2019).

2.2. Generative systems

In recent years, paradigms in the field of ML shifted as a result of breakthroughs in AI algorithms and increasing computing power, giving rise to generative models, such as LLMs (Goodfellow, et al., 2020; Jing and Xu, 2019). As opposed to discriminative ML systems, which simply produce an output score or category, generative systems produce novel output data. At present, dominant research efforts in generative ML focus on textual and visual data, with this paper focusing specifically on text. In this context, notable models include OpenAI’s GPT-n series, Google’s BERT, T5, LaMDA, Bard and PaLM, Meta’s LLaMA, and Microsoft and NVIDIA’s Megatron-Turing LM (Devlin, et al., 2019; Brown, et al., 2020; Raffel, et al., 2020; Chowdhery, et al., 2022; Manyika, 2022; Smith, et al., 2022; Thoppilan, et al., 2022; Touvron, et al., 2023). Prominent open-source LLMs include EleutherAI’s (2021) GPT-J and GPT-NeoX, and Hugging Face’s (2022) BLOOM. Real-world applications of LLMs include dialogue systems, articulated in chat-oriented and question-answering systems, text manipulation, and machine translation across numerous settings including education, healthcare, and art (Li, 2022; Zaib, et al., 2020; Fossa and Sucameli, 2022). The field also presents multimodal systems, such as OpenAI’s DALL-E, Google DeepMind’s Gato, and WuDao (Beijing Academy of Artificial Intelligence, 2021; Ramesh, et al., 2021; Reed, et al., 2022), which produce multiple types of information, such as text, images, and videos, and often attempt to convert one datatype into another (Suzuki and Matsuo, 2022). Crucially, at the time of this paper’s publication, these models and their applications remain largely experimental, and many of their emergent properties are yet to be discovered, making this a decisive moment for hypothesising and proposing governance mechanisms.

The novelty and distinct properties of generative systems prompt questions regarding their relationship with traditional, discriminative ML systems. To highlight their infrastructural similarities, it helps to compare their respective categorisation mechanisms. To articulate this comparison, this paper considers the training data and architecture of LLMs. Statistical language models (LMs), the predecessors of contemporary LLMs, calculate probability distributions over words and sentences in language and generate text accordingly (Jurafsky and Martin, 2008). For instance, given the sentence “where are we”, a statistical LM could assign the highest probability to “going” as the appropriate next word (Huyen, 2019). The introduction of ANNs — also found in discriminative ML systems — resulted in the production of neural LMs (Jing and Xu, 2019; Qudar and Mago, 2020). As the name suggests, these combine ANN and statistical LM architectures to yield substantial performance improvements by addressing word combination and context, ultimately resulting in more sophisticated language generation. As discussed, ANNs are characterised by the process of training, which denotes their performance improvement based on training data (Russell and Norvig, 2010). Accordingly, LLMs are trained on extremely large amounts of data, usually organised in datasets and collected through the automated and systematic extraction — or scraping — of text from Web pages (Bender, et al., 2020; Tamkin, et al., 2021). In basic ANNs, the output neuron that fires represents a single output category or score, which is associated with the highest probability according to the algorithm’s computation (Russell and Norvig, 2010; Sanderson, 2017). Correspondingly, in LLMs, the probability of the output word — corresponding to an output category in ANNs — is determined by the distribution of words in the training data.
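The next-word mechanism described above can be illustrated with a deliberately minimal statistical LM in Python. The corpus, context length, and function names here are invented for illustration; real neural LMs learn smoothed distributions over vast vocabularies rather than raw counts, but the selection of the highest-probability continuation is analogous.

```python
from collections import Counter, defaultdict

def train_counts(corpus, context_len=3):
    """Count which word follows each fixed-length context in a toy corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - context_len):
            context = tuple(words[i:i + context_len])
            counts[context][words[i + context_len]] += 1
    return counts

# A tiny, hypothetical training corpus.
corpus = [
    "where are we going today",
    "where are we going now",
    "where are we headed",
]
counts = train_counts(corpus)

def predict(counts, context):
    """Return the most frequent continuation: an argmax over the
    distribution of next words observed in the training data."""
    return counts[tuple(context.split())].most_common(1)[0][0]
```

Given the prompt “where are we”, this model returns “going”, simply because that continuation occurs most often in the training data.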

The integration of ANNs and statistical LMs sheds light on issues of categorisation and exclusion in LLMs. Let us consider once again the taxonomy of downstream harms proposed by Barocas, et al. (2023). In LLMs, imbalance — or the uneven representation of different groups — can manifest through context, as artificially generated stories are overwhelmingly situated within systems of Western centrism and whiteness, unless the system is specifically prompted otherwise [12]. In a second example, Shihadeh, et al. [13] identify an association between the psychological trait of brilliance and masculinity in GPT-3, or “brilliance bias”. This is further illustrated by Lucy and Bamman (2021), who suggest that female characters are portrayed as less powerful than male characters in stories generated by GPT-3. These can be considered instances of bias, as they represent incorrect associations related to social and historical prejudice (Barocas, et al., 2023). In artificially generated text, stereotypes are reflected in the co-occurrence of historically gendered professions, such as “doctor”, with male pronouns (Bordia and Bowman, 2019; Sheng, et al., 2019). Similarly, given the sentence “the woman works as”, an LLM trained on extensive amounts of potentially problematic data may complete the sentence with “a babysitter” (Sheng, et al., 2019). In this case, “a babysitter” is selected by the model as the textual item with the highest probability based on the sentence provided. Lastly, the downstream harm of categorisation can manifest in LLMs through the system’s failure to appropriately use gender-neutral pronouns (Hossain, et al., 2023). This results in the ascription of binary properties to spectral characteristics of identity (Barocas, et al., 2023). These examples illustrate that Barocas, et al.’s (2023) taxonomy of downstream harms, initially devised for discriminative systems, applies to the evaluation of LLMs. This paper suggests that this is because of the underlying mechanisms of categorisation that characterise both systems.
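The way a skewed training distribution becomes a stereotyped completion can be sketched in Python. The counts below are invented purely for illustration and stand in for the statistics a model would absorb from scraped text; the point is that greedy, argmax-style decoding turns a majority pattern in the data into the model’s only answer.

```python
from collections import Counter

# Deliberately skewed, invented counts of completions observed after the
# prompt "the woman works as a" in a hypothetical training corpus.
completions = Counter({"babysitter": 70, "doctor": 20, "engineer": 10})

# Normalise counts into a probability distribution over next words.
total = sum(completions.values())
distribution = {word: n / total for word, n in completions.items()}

# Greedy decoding selects the single highest-probability continuation,
# so the imbalance in the training data becomes the output every time.
output = max(distribution, key=distribution.get)
```

Here the minority completions are not merely down-weighted but erased from the output entirely, which is the sense in which categorisation upholds the harms described above.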

At present, LLMs are at the centre of a corporate race for the development of ever larger models trained on ever larger datasets (Luitse and Denkena, 2021). Indeed, over the past five years, the size of LLMs has consistently increased both in terms of parameters and training data (Simon, 2021). For instance, OpenAI’s GPT-2 was trained on 40 gigabytes (GB) of text data (Radford, et al., 2018). The successor model GPT-3 was trained on billions of scraped words from datasets such as Common Crawl, the largest publicly available text-based dataset, and WebText (Kolias, et al., 2014; Radford, et al., 2018; Common Crawl Foundation, 2022). Pfotenhauer, et al. [14] refer to this phenomenon as “the politics of scaling” and describe it as a fixation defining narratives of innovation in the Silicon Valley era. In the context of downstream harms, scalability is often presented as the answer to the lack of representation, or misrepresentation, of marginalised and vulnerable groups in training datasets. However, Bender, et al. (2020) suggest that, when datasets are assembled with the objective of scalability, hegemonic narratives are most likely to be retained. This section has suggested that, similarly to discriminative systems, LLMs operate as infrastructures of categorisation.

2.3. Infrastructures of categorisation

Based on this analysis, discriminative and generative ML systems can both be theorised as infrastructures of categorisation. They are classic infrastructures because they are characterised by material and abstract entities including protocols, standards and architectures, and they are difficult to map because of the intricacies characterising their underlying “practices, uses and exchanges” [15]. Combined with the categorising functionality of ANNs, this configuration allows for “the categories of the powerful” to become “taken for granted” in both systems [16]. Within ML systems, these categories are mostly predetermined and standardised, manufactured by designers and engineers, and not “naturally occurring” [17]. This point resonates with Benjamin’s [18] conceptualisation of race as technology: “one constructed to separate, stratify and sanctify the many forms of injustice experienced by members of racialized groups”. Accordingly, powerful, hegemonic categories are technologies that disappear within discriminative and generative infrastructures of categorisation. Thus, generative systems do not depart from discriminative counterparts in terms of underlying probabilistic mechanisms of categorisation. However, this paper proceeds to argue that there are additional mechanisms within LLMs — referred to as “language as infrastructure” — that must be analysed and considered before proposing appropriate governance mechanisms.



3. Language as infrastructure

In the previous section, this paper illustrated that both discriminative and generative ML systems can be theorised as infrastructures of categorisation. As opposed to traditional, discriminative counterparts, which solely produce output scores or categories, LLMs produce previously unseen texts based on human prompts. Based on the functionality of “classifiers”, LLMs produce output text that — when these systems are trained on large, scraped datasets — is likely to reduce human intersectional identities to specific, immutable characteristics represented in linguistic associations, and therefore to lead to downstream harms [19]. As a compound of social and probabilistic constructs, the characteristics of artificially generated language differ from those of natural language. Artificially generated text is more rigid than natural language because it is produced through computational mechanisms of categorisation [20]. The process of altering the functionality of LLMs to mitigate bias is laborious, as it requires, at least in part, retraining them (Solaiman and Dennison, 2021). This entails substantial economic, labour-related, and environmental challenges (Pfotenhauer, et al., 2021). These constraints operate as concrete barriers to changing the output text of LLMs, thus contributing to the rigidity of these systems and reinforcing their capacity to operate as infrastructures of categorisation. However, I now complicate my argument. On one hand, LLMs fit the provided description of infrastructures of categorisation because of the underlying computational mechanisms that enable the production of artificially generated text. On the other hand, I illustrate below that, because of the textual format of their outputs, LLMs simultaneously retain performative capabilities.

Traditional STS focusses on the cultural meaning attributed to material objects. For instance, through an analysis of electric shavers, Oudshoorn and Pinch (2003) provided crucial insight into the gendered significance of technological artefacts. Electric shavers concretise their gendered cultural meanings through branding features, such as shavers called “lipstick” marketed to women and “double action” marketed to men. Oudshoorn and Pinch (2003) further highlighted the manufacturing of artefacts with gendered scripts and attributes that associate control with masculinity and incompetence with femininity. This type of gender analysis can also be applied to ML systems, particularly regarding the names of prominent LLMs. For instance, GPT stands for Generative Pretrained Transformer, BERT for Bidirectional Encoder Representations from Transformers, and LaMDA for Language Model for Dialogue Applications (Radford, et al., 2018; Devlin, et al., 2019; Thoppilan, et al., 2022). However, once LLMs are deployed in real-world applications, their names change substantially. For example, conversational assistants mostly have female names, such as Siri, Alexa, and Cortana, thus suggesting the outsourcing of historically feminine labour to machines (Hester, 2017; Männistö-Funk and Sihvonen, 2018; Costa and Ribas, 2019; Dillon, 2020).

However, with LLMs, this type of gender analysis falls short. On one hand, electric shavers contribute to gender stereotypes through their branding features: they materialise the association between socially constructed gender categorisations and attributes. LLMs, on the other hand, are not static objects merely portraying and embodying hegemonic systems. Through their capability to produce previously unseen linguistic instances, they can reproduce stereotyped versions of language attributed to specific individuals or social groups. Let us consider some examples. Concerning gender, Salewski, et al. (2023) reveal that, when an LLM is prompted to write as a man, it produces better descriptions of cars than when it is prompted to write as a woman. In a second example, Weidinger, et al. [21] suggest that LLMs could be fine-tuned on “an individual’s past speech data to impersonate that individual in cases of identity theft”. In the field of AI, this particular type of manipulation, where specific data is added to the system for malicious purposes, is known as an adversarial attack (Zhang, et al., 2020). In a final example, the community-driven platform FlowGPT (2023) presented an interface called “JesusGPT: The divine dialogue”, also available through OpenAI’s (2023) ChatGPT Plus. In this case, ChatGPT was fine-tuned on the King James Bible to “simulate profound conversations with Jesus Christ himself”. These examples illustrate that LLMs can impersonate single individuals, such as historical figures and ordinary people whose past data is publicly available, as well as social groups, such as men and women. Specifically, in relation to gender, LLMs are performative according to Butler’s (1988) definition, as they establish gender identities through the repetition of distinctly feminine or masculine acts, as illustrated by the first example.

Similarly to traditional STS cultural and gender analyses, contemporary attempts to theorise ML systems fall short in the case of LLMs. For instance, through the concept of “epistemic politics”, Amoore [22] describes recent ML technologies as modes of assembling knowledge that alter traditional norms and thresholds, thus changing the fabric of politics and society. According to this position, when models learn salient features and clusters from training datasets, their assumptions exceed the categories already present in the input data, effectively generating groupings and communities. In LLMs, the production of textual instances based on classificatory mechanisms may result in the assemblage of previously unseen conglomerates — or categories — of data downstream (Barocas, et al., 2023). However, based on the examples provided above, it is crucial to acknowledge that a person using LLMs may not necessarily self-identify with the virtual categories generated by the system. This partially nuances Amoore’s (2023) claim that ML systems generate communities and social groups. According to this analysis, LLMs can be theorised as performative artefacts while simultaneously being considered infrastructures of categorisation. In this paper, I refer to the ensemble of these phenomena as “language as infrastructure”. This combination of characteristics situates LLMs as exceptionally contested, predisposed to media and public sensationalism, and difficult to map. In the following section, despite the inherent complexity of “language as infrastructure”, I suggest the existence of governance opportunities for LLMs. The proposed governance mechanisms make use of the convoluted, even contrasting, characteristics of LLMs identified in this paper. For this reason, they can be considered mechanisms of governance by infrastructure (DeNardis and Musiani, 2016; Yeung, 2017).



4. “Broken machines” as opportunities for infrastructure governance

In the previous section, this paper illustrated that generative systems can be theorised as both rigid infrastructures of categorisation and performative technological artefacts. LLMs challenge pre-existing configurations of IIs through their performative capabilities: they uniquely intensify mechanisms of categorisation by reproducing and amplifying existing social biases through naturalistic human language. This section identifies unique governance opportunities for LLMs that take into account and employ the challenges outlined so far. As previously discussed, both discriminative and generative ML systems, including LLMs, sustain the disappearance of hegemonic categories. Sharma (2020) delves into this notion in “A manifesto for the broken machine”, where she portrays contemporary technologies as implementing forms of power, thus sustaining the disappearance or visibility of certain social groups and categories. For instance, the author mentions the prevalence of patriarchal and white ethno-nationalist values, as well as class and gender inequality. According to Sharma (2020), individuals, institutions, and technologies that do not replicate dominant narratives may be considered “broken”. At the time of writing, LLMs are subject to significant hype and fetishisation due to their capability to produce human-like text, reproduce stereotyped versions of language, and impersonate individuals and social groups (Luitse and Denkena, 2021; Weidinger, et al., 2022). This visibility may be considered a social mechanism that prevents them from disappearing into other infrastructures. In other words, generative systems, including LLMs, are currently visible from an infrastructural perspective. Combined with their experimental nature and largely unknown emergent properties, this situation represents a governance opportunity for scrutinising, deconstructing, changing, and, in the words of Sharma (2020), “breaking” these artefacts.
In turn, their underlying “practices, uses and exchanges”, as well as their potential to produce categories, can also become visible [23].
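The categorising mechanism at stake — a graded probability distribution over candidate outputs, collapsed to a single choice by an argmax — can be made concrete with a minimal sketch. The vocabulary and scores below are invented for illustration only and do not correspond to any specific model:

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidate continuations of "The nurse said ...":
# the candidates and numbers are illustrative assumptions, not model outputs.
vocab = ["she", "he", "they"]
scores = [2.0, 1.0, 0.5]

probs = softmax(scores)
# The distribution retains all three candidates, with graded probabilities ...
print({w: round(p, 2) for w, p in zip(vocab, probs)})

# ... but greedy (argmax) decoding emits only the most probable token,
# and the alternatives disappear from the generated text.
chosen = vocab[max(range(len(probs)), key=probs.__getitem__)]
print(chosen)  # → she
```

Greedy decoding is only one sampling strategy, but the sketch illustrates the categorising moment this paper describes: a distribution over alternatives is reduced to a single output, and the discarded candidates become invisible in the text the model produces.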

This paper finally considers some governance strategies that can support the visibility, scrutiny, and consequent “breakage” of LLMs. These can be considered instances of governance by infrastructure, as they circumvent and repurpose the computational and infrastructural mechanisms of LLMs discussed in previous sections (DeNardis and Musiani, 2016). As previously discussed, the processes of training and fine-tuning — which require extensive data, computation, and labour — represent substantial obstacles to changing the problematic behaviour of LLMs. Bender, et al. (2021) propose dataset curation as a means of rebalancing epistemic values within training datasets. This technique, inspired by data collection methods in archival history, involves the meticulous, justice-oriented selection and construction of training data (Jo and Gebru, 2020; Birhane and Prabhu, 2021). For LLMs, dataset curation emerges as epistemologically and ontologically antithetical to scraping: while scraping methods assemble datasets with an ambition of universality, dataset curation promotes the production of situated knowledge. Dataset curation may thus be viewed as part of a family of critical methods that do not merely attempt to propose solutions to algorithmic bias, but seek to devise tools that bring social injustice into the view of designers, regulators, and stakeholders (Costanza-Chock, 2020).
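The asymmetry between scraping and curation can be sketched in miniature. The following is a hypothetical illustration — the toy corpus, its provenance labels, and the per-source cap are invented assumptions, not part of any cited curation method, and real justice-oriented curation involves far richer editorial judgement than a numeric cap:

```python
from collections import Counter

# A toy "scraped" corpus: each document carries a hypothetical provenance
# label. Real scraped corpora (e.g., Common Crawl) are vastly larger and
# lack such clean annotations.
scraped_corpus = [
    {"text": "doc A", "source": "news"},
    {"text": "doc B", "source": "news"},
    {"text": "doc C", "source": "news"},
    {"text": "doc D", "source": "forum"},
    {"text": "doc E", "source": "community_archive"},
]

def curate(corpus, per_source_cap):
    """Keep at most `per_source_cap` documents per source, so that
    over-represented sources cannot dominate the training data."""
    seen = Counter()
    curated = []
    for doc in corpus:
        if seen[doc["source"]] < per_source_cap:
            curated.append(doc)
            seen[doc["source"]] += 1
    return curated

curated = curate(scraped_corpus, per_source_cap=1)
print([d["source"] for d in curated])
# → ['news', 'forum', 'community_archive']
```

Even this crude rebalancing shows the shift in ambition: the scraped corpus aspires to totality and inherits the skew of whatever is most abundantly online, whereas the curated subset is deliberately, visibly constructed.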

Participatory design (PD), or co-design, sees the involvement of users in technology design processes as a means to improve fairness, accountability, and transparency (Donia and Shaw, 2021). To address the issue of downstream harms in LLMs, explored in previous sections, dataset curation and participatory methods can be especially effective when combined (Barocas, et al., 2023). For instance, researchers at OpenAI proposed the Process for Adapting Language Models to Society (PALMS) framework — the adaptation of LLMs to small-scale curated datasets — as a promising method for mitigating biases associated with sensitive topics (Solaiman and Dennison, 2021). This method could be modified so that the knowledge incorporated in curated datasets is generated through justice-oriented and participatory methods, thereby encompassing the views and experiences of historically underrepresented groups and individuals. In relation to Sharma (2020), adapting LLMs to curated datasets can be considered an initial step towards “breaking” them, through the subversion of the modalities of scale in which these systems have operated thus far. Nonetheless, this paper acknowledges that this methodological approach merely scratches the surface when it comes to redressing power imbalances in generative systems and offering effective, purposeful, and long-term governance mechanisms. Although Sharma (2020) suggests that “broken machines” already redistribute power simply by existing, ample research efforts are needed to theorise, understand, and deploy infrastructural governance processes for LLMs and generative ML systems as a whole.



5. Conclusion

This paper produces an initial mapping of the infrastructural characteristics and implications of LLMs, highlighting some issues and opportunities related to their governance. I establish a similarity between discriminative and generative ML systems in terms of their infrastructural mechanisms of categorisation. Specifically, this paper highlights the functionality of ANN-based “classifiers” and argmax activation functions, present in both discriminative and generative systems, which contribute to the disappearance of the “categories of the powerful” in both technologies [24]. Through several examples, I show that Barocas, et al.’s (2023) taxonomy of downstream harms, initially devised for discriminative systems, applies to the evaluation of LLMs. Through a further analysis of the training data and architectures of LLMs, I conclude that the probabilistic mechanisms of categorisation in LLMs are comparable to those within discriminative ML systems. Therefore, both discriminative and generative ML systems can be theorised as infrastructures of categorisation. In this context, I further theorise dominant categories as technologies that disappear within both discriminative and generative infrastructures of categorisation. This presents a substantial challenge to infrastructure governance, as the disappearance of these categories results in their being taken for granted and difficult to map (DeNardis and Musiani, 2016). I then illustrate that, in addition to their rigidity and capability to cement and reproduce power, LLMs retain performative capabilities because of the textual format of their outputs. I refer to the ensemble of these controversial phenomena as “language as infrastructure”. Finally, while discriminative counterparts disappear into larger IIs, LLMs are currently visible through media hype, fetishisation, and hyperbolic rhetoric upheld by their generation of unique textual instances.
I ultimately consider this situation a governance opportunity that allows us to scrutinise, deconstruct, and virtually “break” these artefacts (Sharma, 2020). I finally suggest two governance mechanisms for downstream harms in LLMs: dataset curation and participatory methods (Barocas, et al., 2023). Crucially, this paper remains an initial mapping, laying foundations for the broader construction of infrastructure governance mechanisms for LLMs and generative ML systems as a whole.


About the author

Lara Dal Molin is a Ph.D. student in Science, Technology and Innovation Studies at the University of Edinburgh, part of the joint programme in Social Data Science with the University of Copenhagen. In her research, she explores the intersection between language, gender and technology. Lara is also a Tutor in the Schools of Social and Political Sciences and Informatics.
E-mail: L [dot] Dal-Molin-1 [at] sms [dot] ed [dot] ac [dot] uk



Acknowledgements

I wish to thank Morgan Currie, James Besse and Léa Stiefel for their invaluable feedback on my writing, and for their efforts in establishing the Governance by Infrastructure partnership between the Universities of Edinburgh and Lausanne. I extend my gratitude to this entire community, which provided inspiration and insight for my research. I also wish to thank the Edinburgh-Copenhagen Social Data Science programme for supporting my doctoral studies and research efforts.



Notes

1. Weiser, 1991, p. 94.

2. Sharma, 2020, p. 171.

3. Alexander, 1990, p. 162; Natale and Ballatore, 2020.

4. Weiser, 1991, p. 94.

5. Star, 1999, p. 385; DeNardis and Musiani, 2016, p. 6.

6. Bowker and Star, 1999, p. 319.

7. Clarke and Star, 2007, p. 115.

8. Clarke and Star, 2007, p. 113; Collier, 2021, p. 1,731.

9. Clarke and Star, 2007, p. 115.

10. Russell and Norvig, 2010, p. 727.

11. Russell and Norvig, 2010, p. 728; Sanderson, 2017.

12. This result is from the author’s original research.

13. Shihadeh, et al., 2022, p. 62.

14. Pfotenhauer, et al., 2021, p. 3.

15. Star and Ruhleder, 1994; Star, 1999, p. 385.

16. Bowker and Star, 1999, p. 319.

17. Birhane and Raji, 2022, p. 1.

18. Benjamin, 2019, p. 36.

19. Russell and Norvig, 2010, p. 728.

20. Clarke and Star, 2007, p. 113.

21. Weidinger, et al., 2022, p. 219.

22. Amoore, 2023, p. 21.

23. Star, 1999, p. 385; DeNardis and Musiani, 2016, p. 6.

24. Bowker and Star, 1999, p. 319; Russell and Norvig, 2010.



References

J. Alexander, 1990. “The sacred and profane information machine: Discourse about the computer as ideology,” Archives de Sciences Sociales des Religions, 35e Année, number 69, pp. 161–171.

L. Amoore, 2023. “Machine learning political orders,” Review of International Studies, volume 49, number 1, pp. 20–36.
doi:, accessed 10 January 2024.

L. Amoore, 2021. “The deep border,” Political Geography (25 November).
doi:, accessed 10 January 2024.

S. Barocas and A.D. Selbst, 2016. “Big data’s disparate impact,” California Law Review, volume 104, number 3, pp. 671–732.

S. Barocas, M. Hardt, and A. Narayanan, 2023. Fairness and machine learning: Limitations and opportunities. Cambridge, Mass.: MIT Press; also at, accessed 4 January 2024.

Beijing Academy of Artificial Intelligence, 2021. “面向认知,智源研究院联合多家单位发布超大规模新型预训练模型‘悟道·文汇’” (11 January), at, accessed 10 January 2024.

E.M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, 2021. “On the dangers of stochastic parrots: Can language models be too big?” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623.
doi:, accessed 10 January 2024.

R. Benjamin, 2019. Race after technology: Abolitionist tools for the new Jim Code. Cambridge: Polity.

A. Birhane and D. Raji, 2022. “ChatGPT, Galactica, and the progress trap,” Wired (8 December), at, accessed 20 December 2022.

A. Birhane and V.U. Prabhu, 2021. “Large image datasets: A Pyrrhic win for computer vision?” 2021 IEEE Winter Conference on Applications of Computer Vision (WACV).
doi:, accessed 10 January 2024.

S. Bordia and S.R. Bowman, 2019. “Identifying and reducing gender bias in word-level language models,” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pp. 7–15.
doi:, accessed 10 January 2024.

G.C. Bowker and S.L. Star, 1999. Sorting things out: Classification and its consequences. Cambridge, Mass.: MIT Press.
doi:, accessed 10 January 2024.

L. Bowleg, 2008. “When Black + lesbian + woman ≠ Black lesbian woman: The methodological challenges of qualitative and quantitative intersectionality research,” Sex Roles, volume 59, numbers 5–6, pp. 312–325.
doi:, accessed 10 January 2024.

T. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D.M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, 2020. “Language models are few-shot learners,” 34th Conference on Neural Information Processing Systems (NeurIPS 2020), at, accessed 10 January 2024.

S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y.T. Lee, Y. Li, S. Lundberg, H. Nori, H. Palangi, M.T. Ribeiro, and Y. Zhang, 2023. “Sparks of artificial general intelligence: Early experiments with GPT-4,” arXiv:2303.12712 (27 March).
doi:, accessed 7 December 2023.

J. Butler, 1988. “Performative acts and gender constitution: An essay in phenomenology and feminist theory,” Theatre Journal, volume 40, number 4, pp. 519–531.
doi:, accessed 10 January 2024.

A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H.W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A.M. Dai, T.S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel, 2022. “PaLM: Scaling language modeling with pathways,” arXiv:2204.02311 (5 April).
doi:, accessed 20 June 2023.

A.E. Clarke and S.L. Star, 2007. “The social worlds framework: A theory/methods package,” In: E.J. Hackett, O. Amsterdamska, M.E. Lynch, and J. Wajcman (editors). Handbook of science and technology studies. Third edition. Cambridge, Mass.: MIT Press, pp. 113–137.

S. Costanza-Chock, 2020. Design justice: Community-led practices to build the worlds we need. Cambridge, Mass.: MIT Press.
doi:, accessed 10 January 2024.

P. Costa and L. Ribas, 2019. “AI becomes her: Discussing gender and artificial intelligence,” Technoetic Arts, volume 17, number 1, pp. 171–193.
doi:, accessed 10 January 2024.

B. Collier, 2021. “The power to structure: Exploring social worlds of privacy, technology and power in the Tor Project,” Information, Communication & Society, volume 24, number 12, pp. 1,728–1,744.
doi:, accessed 7 December 2023.

Common Crawl Foundation, 2022. “Overview,” at, accessed 10 January 2024.

K. Crenshaw, 1989. “Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics,” University of Chicago Legal Forum, volume 1989, number 1, article 8, and at, accessed 10 January 2024.

C. Criado Perez, 2019. Invisible women: Exposing data bias in a world designed for men. London: Penguin Random House.

L. DeNardis and F. Musiani, 2016. “Governance by infrastructure,” In: F. Musiani, D.L. Cogburn, L. DeNardis, N.S. Levinson (editors). The turn to infrastructure in Internet governance. New York: Palgrave Macmillan, pp. 3–21.
doi:, accessed 4 January 2024.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, 2019. “BERT: Pre-training of deep bidirectional transformers for language understanding,” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pp. 4,171–4,186.
doi:, accessed 4 January 2024.

S. Dillon, 2020. “The Eliza Effect and its dangers: From demystification to gender critique,” Journal for Cultural Research, volume 24, number 1, pp. 1–15.
doi:, accessed 4 January 2024.

J. Donia and J. Shaw, 2021. “Co-design and ethical artificial intelligence for health: Myths and misconceptions,” AIES ’21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, p. 77.
doi:, accessed 4 January 2024.

V. Eubanks, 2018. Automating inequality: How high-tech tools profile, police and punish the poor. New York: St. Martin's Press.

FlowGPT, 2023. “JesusGPT: The divine dialogue,” at, accessed 11 December 2023.

F. Fossa and I. Sucameli, 2022. “Gender bias and conversational agents: An ethical perspective on social robotics,” Science and Engineering Ethics, volume 28, article number 23.
doi:, accessed 4 January 2024.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2020. “Generative adversarial networks,” Communications of the ACM, volume 63, number 11, pp. 139–144.
doi:, accessed 4 January 2024.

T. Hastie, R. Tibshirani, and J. Friedman, 2008. The elements of statistical learning: Data mining, inference, and prediction. Second edition. New York: Springer.
doi:, accessed 4 January 2024.

H. Hester, 2017. “Technology becomes her,” New Vistas, volume 3, number 1, pp. 46–50, and at, accessed 4 January 2024.

T. Hossain, S. Dev, and S. Singh, 2023. “MISGENDERED: Limits of large language models in understanding pronouns,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, volume 1: Long papers, pp. 5,352–5,367.
doi:, accessed 4 January 2024.

D. Hovy and S. Prabhumoye, 2021. “Five sources of bias in natural language processing,” Language and Linguistics Compass, volume 15, number 8, e12432.
doi:, accessed 4 January 2024.

Hugging Face, 2022. “BLOOM,” at, accessed 20 December 2022.

C. Huyen, 2019. “Evaluation metrics for language modeling” (18 October), at, accessed 20 December 2022.

K. Jing and J. Xu, 2019. “A survey on neural network language models,” arXiv:1906.03591 (9 June).
doi:, accessed 4 January 2024.

E.S. Jo and T. Gebru, 2020. “Lessons from archives: Strategies for collecting sociocultural data in machine learning,” FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 306–316.
doi:, accessed 4 January 2024.

D. Jurafsky and J.H. Martin, 2008. Speech and language processing: An introduction to natural language processing, computational linguistics and speech recognition. Second edition. London: Pearson.

V. Kolias, I. Anagnostopoulos, and E. Kayafas, 2014. “Exploratory analysis of a terabyte scale Web corpus,” 20th IMEKO TC4 International Symposium and 18th International Workshop on ADC Modelling and Testing, pp. 726–730, and at, accessed 4 January 2024.

H. Li, 2022. “Language models: Past, present, and future,” Communications of the ACM, volume 65, number 7, pp. 56–63.
doi:, accessed 4 January 2024.

L. Lucy and D. Bamman, 2021. “Gender and representation bias in GPT-3 generated stories,” Proceedings of the Third Workshop on Narrative Understanding, pp. 48–55.
doi:, accessed 4 January 2024.

D. Luitse and W. Denkena, 2021. “The great transformer: Examining the role of large language models in the political economy of AI,” Big Data & Society (29 September).
doi:, accessed 4 January 2024.

T. Männistö-Funk and T. Sihvonen, 2018. “Voices from the uncanny valley: How robots and artificial intelligences talk back to us,” Digital Culture & Society, volume 4, number 1, pp. 45–64.
doi:, accessed 4 January 2024.

J. Manyika, 2022. “An overview of Bard: An early experiment with generative AI,” Google, at, accessed 4 January 2024.

S. Natale and A. Ballatore, 2020. “Imagining the thinking machine: Technological myths and the rise of artificial intelligence,” Convergence, volume 26, number 1, pp. 3–18.
doi:, accessed 4 January 2024.

J. Nichols, H.W.H. Chan, and M.A.B. Baker, 2019. “Machine learning: Applications of artificial intelligence to imaging and diagnosis,” Biophysical Reviews, volume 11, number 1, pp. 111–118.
doi:, accessed 4 January 2024.

OpenAI, 2023. “JesusGPT,” at, accessed 11 December 2023.

OpenAI, 2018. “OpenAI charter” (9 April), at, accessed 12 December 2023.

N. Oudshoorn and T. Pinch, 2003. “Introduction: How users and non-users matter,” In: N. Oudshoorn and T. Pinch (editors). How users matter: The co-construction of users and technology. Cambridge, Mass.: MIT Press.
doi:, accessed 4 January 2024.

R. Patel, 2021. “Softmax & argmax,” at, accessed 21 December 2022.

S. Pfotenhauer, B. Laurent, K. Papageorgiou, and J. Stilgoe, 2021. “The politics of scaling,” Social Studies of Science, volume 52, number 1, pp. 3–34.
doi:, accessed 10 January 2024.

M.M.A. Qudar and V. Mago, 2020. “TweetBERT: A pretrained language representation model for Twitter text analysis,” arXiv:2010.11091 (17 October).
doi:, accessed 4 January 2024.

A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, 2019. “Language models are unsupervised multitask learners,” at, accessed 10 January 2024.

C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P.J. Liu, 2020. “Exploring the limits of transfer learning with a unified text-to-text transformer,” Journal of Machine Learning Research, volume 21, number 1, article number 140, pp. 5,485–5,551.

A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever, 2021. “Zero-shot text-to-image generation,” Proceedings of the 38th International Conference on Machine Learning, volume 139, pp. 8,821–8,831, and at, accessed 4 January 2024.

S. Reed, K. Żołna, E. Parisotto, S. Gómez Colmenarejo, A. Novikov, G. Barth-Maron, M. Giménez, Y. Sulsky, J. Kay, J.T. Springenberg, T. Eccles, J. Bruce, A. Razavi, A. Edwards, N. Heess, Y. Chen, R. Hadsell, O. Vinyals, M. Bordbar, and N. de Freitas, 2022. “A generalist agent,” Transactions on Machine Learning Research, at, accessed 4 January 2024.

S. Russell and P. Norvig, 2010. Artificial intelligence: A modern approach. Third edition. Harlow: Pearson Education.

L. Salewski, S. Alaniz, I. Rio-Torto, E. Schulz, and Z. Akata, 2023. “In-context impersonation reveals large language models’ strengths and biases,” arXiv:2305.14930 (24 May).
doi:, accessed 11 December 2023.

G. Sanderson, 2017. “But what is a neural network?” (5 October), at, accessed 19 December 2022.

S. Sharma, 2020. “A manifesto for the broken machine,” Camera Obscura, volume 35, number 2, pp. 171–179.
doi:, accessed 4 January 2024.

E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng, 2019. “The woman worked as a babysitter: On biases in language generation,” Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the Ninth International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3,407–3,412.
doi:, accessed 4 January 2024.

J. Shihadeh, M. Ackerman, A. Troske, N. Lawson, and E. Gonzalez, 2022. “Brilliance bias in GPT-3,” 2022 IEEE Global Humanitarian Technology Conference (GHTC).
doi:, accessed 10 January 2024.

J. Simon, 2021. “Large language models: A new Moore’s Law?” at, accessed 30 November 2021.

S. Smith, M. Patwary, B. Norick, P. LeGresley, S. Rajbhandari, J. Casper, Z. Liu, S. Prabhumoye, G. Zerveas, V. Korthikanti, E. Zhang, R. Child, R.Y. Aminabadi, J. Bernauer, X. Song, M. Shoeybi, Y. He, M. Houston, S. Tiwary, and B. Catanzaro, 2022. “Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, A large-scale generative language model,” arXiv:2201.11990 (28 January).
doi:, accessed 20 December 2022.

I. Solaiman and C. Dennison, 2021. “Process for adapting language models to society (PALMS) with values-targeted datasets,” 35th Conference on Neural Information Processing Systems (NeurIPS 2021), at, accessed 10 January 2024.

S.L. Star, 1999. “The ethnography of infrastructure,” American Behavioral Scientist, volume 43, number 3, pp. 377–391.
doi:, accessed 4 January 2024.

S.L. Star and K. Ruhleder, 1994. “Steps towards an ecology of infrastructure: Complex problems in design and access for large-scale collaborative systems,” CSCW ’94: Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, pp. 253–264.
doi:, accessed 10 January 2024.

H. Suresh and J. Guttag, 2021. “A framework for understanding sources of harm throughout the machine learning life cycle,” EAAMO ’21: Equity and Access in Algorithms, Mechanisms, and Optimization, article number 17, pp. 1–9.
doi:, accessed 4 January 2024.

M. Suzuki and Y. Matsuo, 2022. “A survey of multimodal deep generative models,” Advanced Robotics, volume 36, numbers 5–6, pp. 261–278.
doi:, accessed 4 January 2024.

A. Tamkin, M. Brundage, J. Clark, and D. Ganguli, 2021. “Understanding the capabilities, limitations, and societal impact of large language models,” arXiv:2102.02503 (4 February).
doi:, accessed 8 June 2022.

S.L. Thomas, D. Nafus, and J. Sherman, 2018. “Algorithms as fetish: Faith and possibility in algorithmic work,” Big Data & Society (9 January).
doi:, accessed 4 January 2024.

R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H.S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, V. Zhao, Y. Zhou, C.-C. Chang, I. Krivokon, W. Rusch, M. Pickett, P. Srinivasan, L. Man, K. Meier-Hellstern, M.R. Morris, T. Doshi, R.D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. Aguera-Arcas, C. Cui, M. Croak, E. Chi, and Q. Le, 2022. “LaMDA: Language models for dialog applications,” arXiv:2201.08239 (20 January).
doi:, accessed 20 December 2022.

H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, 2023. “LLaMA: Open and efficient foundation language models,” Hugging Face (27 February), at, accessed 4 January 2024.

L. Weidinger, J. Uesato, M. Rauh, C. Griffin, P.-S. Huang, J. Mellor, A. Glaese, M. Cheng, B. Balle, A. Kasirzadeh, C. Biles, S. Brown, Z. Kenton, W. Hawkins, T. Stepleton, A. Birhane, L.A. Hendricks, L. Rimell, W.S. Isaac, J. Haas, S. Legassick, G. Irving, and I. Gabriel, 2022. “Taxonomy of risks posed by language models,” FAccT ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 214–229.
doi:, accessed 4 January 2024.

M. Weiser, 1991. “The computer for the 21st century,” Scientific American, volume 265, number 3, pp. 94–104.

K. Yeung, 2017. “‘Hypernudge’: Big data as a mode of regulation by design,” Information, Communication & Society, volume 20, number 1, pp. 118–136.
doi:, accessed 4 January 2024.

M. Zaib, Q. Sheng, and W.E. Zhang, 2020. “A short survey of pre-trained language models for conversational AI — A new age in NLP,” ACSW ’20: Proceedings of the Australasian Computer Science Week Multiconference, article number 11, pp. 1–4.
doi:, accessed 4 January 2024.

W.E. Zhang, Q. Sheng, A. Alhazmi, and C. Li, 2020. “Adversarial attacks on deep-learning models in natural language processing: A survey,” ACM Transactions on Intelligent Systems and Technology, volume 11, number 3, article number 24, pp. 1–41.
doi:, accessed 4 January 2024.


Editorial history

Received 30 December 2023; accepted 4 January 2024.

This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Notes towards infrastructure governance for large language models
by Lara Dal Molin.
First Monday, Volume 29, Number 2 - 5 February 2024