First Monday

Mysterious and manipulative black boxes: A qualitative analysis of perceptions on recommender systems
by Jukka Ruohonen



Abstract
Recommender systems are used to provide relevant suggestions on various matters. Although these systems are a classical research topic, knowledge is still limited regarding public opinion about them. Public opinion is also important because the systems are known to cause various problems. To this end, this paper presents a qualitative analysis of the perceptions of ordinary citizens, civil society groups, businesses, and others on recommender systems in Europe. The dataset examined is based on answers submitted to a consultation about the Digital Services Act (DSA) recently enacted in the European Union (EU). Therefore, not only does the paper contribute to the pressing question about regulating new technologies and online platforms, but it also reveals insights about the policy-making of the DSA. According to the qualitative results, Europeans have generally negative opinions about recommender systems and the quality of their recommendations. The systems are widely seen to violate privacy and other fundamental rights. According to many Europeans, the systems also cause various societal problems, including even threats to democracy. Furthermore, existing regulations in the EU are commonly seen to have failed due to a lack of proper enforcement. Numerous suggestions were made by the respondents to the consultation for improving the situation, but only a few of these ended up in the DSA.

Contents

1. Introduction
2. Background
3. Materials and methods
4. Results
5. Conclusion

 


 

1. Introduction

Recommender systems are computational software solutions for providing suggestions that are most relevant for a particular user. These suggestions vary from one application domain to another; they may refer to recommendations about what to purchase, what news to consume, what music to listen to, and so forth. Besides being a classical research topic in computer science, recommender systems have long been important for delivering relevant information from the vast sources of the Internet. They are also important for companies and their business intelligence, including their online advertising. Nowadays, when a purchase is made, an accompanying advertisement typically follows on an online platform.

However, recommender systems have long been scrutinized and criticized for their various ethical lapses (Milano, et al., 2020). Privacy is typically and inevitably violated because particularly the newer systems are based on personalization. In addition, concerns have frequently been raised about the accuracy and quality of recommendations, their fairness and accountability, and the explainability and transparency of the systems. Recently, these systems have also been seen to cause different individual harms and lead to different societal threats, including but not limited to threats to fundamental rights and even democracy.

To these and other ends, the EU enacted the DSA, that is, Regulation (EU) 2022/2065, in 2022. The general enforcement date is set to 2024. This regulation covers numerous distinct issues related to online platforms, recommender systems among them. Thus, this paper examines the perceptions of ordinary EU citizens, of the civil society groups that typically represent them on matters related to rights and technology, of businesses, and of others about recommender systems, based on the open consultation that was held for the DSA.

The motivation for the paper as well as its contribution are two-fold. First, there is only little qualitative research on the public perceptions of recommender systems. Although quantitative surveys have been conducted in recent years, qualitative insights have been lacking. Therefore, the paper’s qualitative approach and results fill an important gap in existing research. As will be shown, the qualitative observations reveal interesting insights about what people and various stakeholders “really think” as compared to their answers to Likert scales in surveys.

Second, the paper contributes to the vast, timely, politically hot, and pressing research domain on the regulation of new technologies, whether artificial intelligence or face recognition, and the so-called Big Tech companies behind them. While it is beyond the scope of this paper to delve deeper into this topic, a motivating point can be made in relation to technology ethics.

There has been an interesting recent debate among ethics scholars and ethicists regarding their stances on regulation. Besides the issues with ethics washing and ethics bashing (Bietti, 2020), some have recently argued that rather than pursuing “ethics-from-within” and promoting self-regulation, technology ethicists should take the regulatory power of governments into account as a viable solution for solving foundational problems (Sætra, et al., 2022). Others have disagreed, arguing along the lines that political philosophy and politics are not without their own problems; there are political elites who can be likened to Big Tech companies, politicians are vote-seekers who do not understand business realities, public authorities are not always enforcing laws justly, and so forth (Chomanski, 2021). What is striking in this ethics debate is the lack of commonsense connections to the foundations of social sciences, including political science and law.

When foundations of liberal democracies, such as fundamental rights and democracy itself, are under threat from technologies and their use, law is expected to intervene (Hildebrandt, 2020), regardless of what ethicists think and say. While privacy and data protection violations are good examples of threats to fundamental rights, a good and timely, but hardly the only, example of a threat to democracy is online election interference by foreign or domestic bad actors. Furthermore, in liberal democracies it is the people, not ethicists, who possess the ultimate power to decide over the laws that they perceive as necessary. This cornerstone applies even when keeping in mind the necessary delegation of power in representative democracies and the EU’s enduring democratic deficit. With this point in mind, the paper’s qualitative observations about opinions and perceptions also offer ethicists a lot to ponder.

For everyone, a central tenet throughout the paper is a hypothesized skepticism, particularly among European citizens and civil society groups, toward recommender systems. Such skepticism was already present during the negotiations of the General Data Protection Regulation (GDPR). Later on, signs of skepticism, which to some extent and implicitly correlate with so-called neoluddism (Humberstone, 2023), have been present during the negotiations of the EU’s recent technology regulations, including the DSA. However, in politics skepticism and critical viewpoints are one thing and business and other realities another. Thus, the central tenet must be evaluated against other interests and the outcome of the policy-making, the actual DSA. On these notes, the paper’s background merits further elaboration.

 

++++++++++

2. Background

The existing research on recommender systems is vast in computer science. Recently, relevant contributions have also been made in numerous other fields, including the social sciences. Therefore, it suffices to only briefly review a few relevant studies about public opinion and perceptions; further relevant studies are pointed out during the presentation of the results. Thus, to begin with, several empirical studies have recently been conducted about the perceptions of people on algorithms and algorithmic decision-making.

The underlying questions are typically framed around fairness and trust. According to literature reviews, however, there is no consensus over the definitions of these two concepts, measurements of them vary from one study to another, and results are generally ambiguous, indicating, for instance, that humans are viewed as fairer than algorithms or the other way around (Starke, et al., 2022; Treyger, et al., 2023). Although it remains unclear whether fairness can even be formalized mathematically (Buyl and De Bie, 2024), some studies indicate a middle ground, finding support for an observation that algorithmic decision-making and human-based decisions are both perceived as equally fair and trustworthy in mechanical tasks (Lee, 2018). Regarding artificial intelligence more generally, such factors as accountability, fairness, security, privacy, accuracy, and explainability have been observed to matter for people’s perceptions (Kieslich, et al., 2022; Treyger, et al., 2023). Analogous studies have been conducted in the context of recommender systems.

A number of distinct dimensions and variables have been considered for evaluating recommender systems. These include at least the following: the perceived variety in recommendations, the accuracy and quality of recommendations, the effort required to use a given system, the perceived effectiveness, efficiency, and enjoyment, the difficulty in making choices based on the recommendations, trust placed upon the systems and skepticism expressed toward these, the availability of functionality for users to contribute to the rankings and recommendations, scrutability of the systems, user interface designs for the systems, compliance of the systems with regulations, counterfactual recommendations, domain knowledge, and privacy concerns (Knijnenburg, et al., 2011; Martijn, et al., 2022; Pu, et al., 2012; Shang, et al., 2022). As recommender systems are widely perceived as black boxes by people, particular emphasis has been placed upon controllability (the ability for users to contribute) and explainability, the latter focusing on making the recommendation process and the reasons behind particular recommendations clearer (Tsai and Brusilovsky, 2021). Transparency has often been seen as the primary way to achieve explainability or at least improve it. Transparency also received a specific focus in the DSA.

Several surveys have been conducted with transparency in mind. Although the causal presumptions in these survey studies are often ambiguous with little uniformity across studies, transparency has been observed to improve people’s perceptions of the privacy and fairness of algorithmic decision-making in general (Aysolmaza, et al., 2023). Transparency of recommender systems also affects user satisfaction (Gedikli, et al., 2014). It further fosters trust placed upon the systems (Shin, Rasul, et al., 2022). Trust, in turn, moderates privacy concerns (Shin, Zaid, et al., 2022). There are some notable weaknesses in these studies. For instance, the underlying assumption seems to be that privacy is merely a concern of people; that privacy violations would somehow disappear with improved transparency. In other words, perceptions and people’s opinions do not necessarily match reality; privacy may be severely violated even when people have no concerns over it. This kind of reasoning is also present in online advertisement research. Among other things, it has been argued that privacy perceptions are malleable, which allows advertisers to use different tactics and tricks to counter privacy concerns (van den Broeck, et al., 2020). Such reasoning is not followed in computer science research on privacy. Nor is it the logic of data protection law.

Two additional brief points are warranted. First, as argued by Lessig in a recent interview, recommender systems and generative artificial intelligence are largely the same thing at the moment; they will be or already are “deployed hand in glove in order to achieve the objective of the person deploying them” (Patel, 2023; cf., also Kapoor and Narayanan, 2023). Note that this particular person may not refer to a natural person. As will be shown, recommender systems, online advertisements, and privacy are also closely, if not inseparably, interlinked. The second point follows: an important limitation in existing research concerns the harms caused by recommender systems and the societal threats they pose. Although there is empirical research on the public perception of threats caused by artificial intelligence (Kieslich, et al., 2021), the perceptions of citizens and other stakeholders about the societal threats caused by recommender systems have received less attention. In fact, according to a reasonable literature search, there are no directly comparable previous works on this topic. Therefore, the paper fills an important gap in existing research. It also contributes to the ongoing political debate around the DSA.

 

++++++++++

3. Materials and methods

3.1. Data

The data examined is based on the responses to the DSA’s open consultation initiated by the European Commission (2020). The consultation period ran from June 2020 to September 2020. In total, 2,863 valid responses were received. The answers solicited in the consultation covered the full range of the DSA: illegal and harmful content, disinformation, systemic societal threats such as the COVID-19 pandemic, content moderation and algorithms thereto, reporting practices for illegal content, information sources for forensics, dispute resolution for content takedowns and account suspensions, protection of minors, platforms and marketplaces established outside of the EU, transparency reports released by companies, data sharing with third parties, disclosure of data to competent authorities, threats to fundamental rights, platform liability, specific questions about existing laws, unfair business practices of large platforms and their gate-keeping roles, online advertising, media pluralism, and so forth.

There were also two specific open-ended questions about recommender systems. The answers given to these provide the empirical material for the examination. The two questions are:

  1. “When content is recommended to you — such as products to purchase on a platform, or videos to watch, articles to read, users to follow — are you able to obtain enough information on why such content has been recommended to you? Please explain.”

  2. “In your view, what measures are necessary with regard to algorithmic recommender systems used by online platforms?”

The answers to these two questions were analyzed together; no attempts were made to separate the answers and opinions on the availability of information from the solution proposals. It should also be mentioned that many answers were given in languages other than English. These were machine-translated with Google Translate. On that note, it should be further emphasized that the answers submitted to the open policy consultation were not limited to Europe. For instance, many non-European, globally operating technology companies and their lobby groups submitted their own answers.
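To make the translation step concrete, the following minimal sketch in Python shows how such a batch translation could be scripted; it assumes the google-cloud-translate client library and a hypothetical spreadsheet layout with an “answer” column, neither of which is documented in the consultation materials beyond the note that Google Translate was used.

    # A minimal sketch of the machine-translation step, assuming the
    # google-cloud-translate client library and a hypothetical spreadsheet
    # with an "answer" column; the exact tooling used for the paper is not
    # documented beyond "Google Translate".
    import pandas as pd
    from google.cloud import translate_v2 as translate

    client = translate.Client()  # requires GOOGLE_APPLICATION_CREDENTIALS

    def to_english(text: str) -> str:
        """Translate a single consultation answer into English."""
        if not isinstance(text, str) or not text.strip():
            return ""
        return client.translate(text, target_language="en")["translatedText"]

    answers = pd.read_excel("dsa_consultation.xlsx")  # hypothetical file name
    answers["answer_en"] = answers["answer"].map(to_english)
    answers.to_excel("dsa_consultation_translated.xlsx", index=False)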

3.2. Methods

Qualitative analysis was used for tackling the answers to the two broad and predefined questions posed in the consultation about recommender systems. On one hand, the answers were relatively short; typically only a few sentences were provided by the respondents, although longer answers with several paragraphs were present. On the other hand, there were nearly 3,000 answers in total. The nature of the dataset limited the scope of suitable qualitative methods; neither narratives nor discourses could be found in the answers, and so forth. Thus, the analysis was built on three well-known methods: qualitative content analysis, thematic analysis, and grounded theory.

Inductive logic was used for all three internally. Externally, however, the three methods were applied sequentially: grounded theory followed the thematic analysis, which, in turn, followed the results from the qualitative content analysis. This type of between-method triangulation is fairly common in qualitative research (Flick, 2004). In the present context, it placed the three methods into a complementary relationship; they complement each other reciprocally.

In essence, a conventional variant of qualitative content analysis seeks to find latent patterns and constructs by identifying key concepts and coding categories from textual data during the analysis (Hsieh and Shannon, 2005). As this method is prone to reduce to mere counting (Morgan, 1993), which does not align well with the rationale of qualitative analysis and its goal of providing nuanced and thick explanations, a thematic analysis was used as a subsequent method.

The thematic method is highly similar to the qualitative content analysis method. In essence: given the categories and counts already at hand, the answers were re-read in order to specify the key themes characterizing the dataset through clustering related answers together (Crowe, et al., 2015). Finally, the grounded theory variant adopted takes the key themes initially for granted, but does not consider them a theory, which requires transcending beyond plain empirical descriptions (Apramian, et al., 2017). Accordingly, theory requires a refinement of the key themes such that an emerging theoretical core becomes visible (Heath and Cowley, 2004). As the analysis concerns an EU regulation, the realm of European policy and politics is also where the adopted grounded theory variant seeks to transcend the key themes. Because the DSA has already been enacted, actual policy recommendations were kept to a minimum, but the transcended themes still contribute to the practical policy realm. In particular, the forthcoming enforcement of the new European regulation is being closely watched globally.

In addition to the triangulation, the trustworthiness of the qualitative results was improved by following the so-called principle of transparency (cf., Ruohonen, et al., 2018). That is, each notable qualitative observation or claim was backed with an explicit reference to the dataset. These numerical references refer to the rows in the (MS Excel) dataset. As the data is openly available from the EU, this referencing also allows easy replication checks.
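As a hedged illustration of how such replication checks and category counts could be carried out, the sketch below loads the spreadsheet with pandas, looks up an answer by its row number, and tallies manually assigned codes into frequency counts; the file name and column names are assumptions for illustration only, not part of the published dataset.

    # A minimal sketch of row-based referencing and code tallying, assuming
    # a hypothetical file name and column layout; the openly available EU
    # dataset would need to be mapped onto these assumed names.
    import pandas as pd

    df = pd.read_excel("dsa_consultation.xlsx")  # hypothetical file name

    def answer_by_row(row_number: int) -> str:
        """Return the answer referenced by a numerical row reference."""
        return str(df.loc[row_number, "answer"])  # "answer" is an assumed column

    # Tally manually assigned codes into frequencies, mirroring the counting
    # step of the qualitative content analysis.
    code_counts = df["assigned_code"].value_counts()  # "assigned_code" is assumed
    print(code_counts)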

 

++++++++++

4. Results

The results are presented according to the key themes obtained through the thematic analysis, which, to recall, was conducted after the qualitative content analysis. The key themes are: lack of information about recommender systems (Section 4.1), privacy (Section 4.2), effectiveness of the recommendations (Section 4.3), harms and threats caused by recommender systems (Section 4.4), proposals for countering some or all of the harms and threats (Section 4.5), concerns about the new regulation (Section 4.6), and the DSA’s already ratified response (Section 4.7). When possible and appropriate, the presentation of these themes is accompanied by concise references to the academic research literature. It should also be noted that the last theme in Section 4.7 is not a part of the dataset; instead, it is a critical reflection obtained through the grounded theory approach. It provides a key reference point for the overall reflection in the concluding Section 5.

4.1. Black boxes

Many, if not the majority, of the individual EU citizens who responded to the consultation expressed negative opinions about recommender systems. There were many signs in the dataset of a lack of knowledge, apathy, powerlessness, fear, and anger toward algorithms and their recommendations.

Recommender systems are “a complete black-box for me”. [1] Their recommendations were perceived as being mysterious by many respondents [2]. These just appear [3]. They are just out there [4]. It is baffling, very odd, and really strange [5]. Therefore, people were suspicious of recommender systems [6]. Some mistrust the systems and oppose algorithms [7]. The recommendations given by them were annoying, surprising, and frightening to many [8]. The recommendation process “scares me a bit, to be honest”. [9] The systems also manipulate people and deceive the public [10]. They are misleading, roach-like, and unlawful [11]. Their recommendations are cheeky [12], unwanted [13], obscure [14], creepy [15], dangerous [16], unhealthy [17], harmful [18], inappropriate [19], sexist [20], and offensive [21]. These recommendations further make it difficult to disentangle true from false [22]. Some have fallen for scams because of the recommendations [23]. The skepticism expressed does not stop here.

Many people did not want to receive recommendations from algorithms [24]. Nevertheless, recommender systems were still forced upon them by coercive business practices [25]. These systems pollute the Internet [26]. The never-ending ads blight everything [27]. Such pollution makes it understandable why some people were fundamentally opposed to individualized advertising, but these people had no reasonable way of expressing their position [28]. Some have still tried to suppress the recommendations [29]. Some wanted more information so that they could block them [30]. At the same time, many companies thought that consumers “are interested in personalization”. [31] The same assumption about personalization is often made in academic research, and again without empirical backing or other proper justifications (Shin, Zaid, et al., 2022). These diverging opinions underline the conflicting interests between platforms and their consumers or non-paying users.

There is a lack of information about recommender systems and the algorithms used in these. Although some said that partial or full information was available about recommender systems and their recommendations, the majority of respondents expressed the opinion that there was no good information available (see Table 1). Even when some information is provided, it is generally insufficient according to respondents.

 

Table 1: Availability of information.
Answer | Frequency
No sufficient information | 583
Partially or fully sufficient information | 58

 

When citizens went to look for more information, they indeed often concluded that the information provided was either too hard to understand or too general, vague, ambiguous, misleading, and disguised [32]. The information also lacks translations into languages other than English [33]. Some have tried to directly contact online advertisers for information [34]. But often “it is impossible for users to trace them”. [35] Therefore, transparency should be increased so that it becomes possible to identify who and which organizations are pushing the recommendations [36]. Although access to advertising profiles has been seen as important for data protection rights (Hildebrandt, 2009), according to many respondents, it is impossible to have control over a profile generated for advertising [37].

Some have tried to exercise their data protection rights granted by the GDPR. However, the results were often disappointing; “data is never made available by companies displaying or producing recommendations, and is not available in GDPR requests”. [38] Analogously, a citizen expressed disappointment about both platforms and data protection authorities after having dealt with a content deletion request with two authorities in two different countries without success [39]. Furthermore, many people noted that the choices provided by some platforms did not make a difference; they still received personalized advertisements even though they had refused to give their consent [40]. Using the choices merely meant that the same amount of ads was received but on slightly different topics [41]. In other words, violations of the GDPR have been widespread, and they likely continue today.

An important further point is that the “why am I seeing this” type of functionality offered by some platforms was widely seen as insufficient for users, researchers, and public authorities to understand recommendations [42]. Unsurprisingly, a major industry lobby group stated the exact opposite [43]. However, it is the critical viewpoint that receives support from academic research; the transparency functionality of some platforms has indeed been concluded to be insufficient, misleading, incomplete, and vague (Andreou, et al., 2018). The respondents also pointed out that the functionality provided by other platforms is seldom actually used by people, possibly due to the explicit reliance on deceptive user interface designs [44]. Usability issues have also been recognized in recent research (Armitage, et al., 2023). This point was further raised by citizens; information is difficult or even impossible to access and it is too broad [45]. All in all, the apparent lack of transparency is a key element in the general skepticism expressed by the respondents.

4.2. Tracking and privacy

Privacy on the World Wide Web has continuously declined throughout the past 20 years. Therefore, it is not surprising that tracking was also widely acknowledged by the respondents; “it seems obvious to me that algorithms are tracking me”. [46] Many felt that they were under surveillance by algorithms of unknown entities who harvested their personal data [47]. These entities rob people’s personal details [48]. The tracking was generally seen as intrusive by many [49]. In other words, it felt like spying [50]. It was like “espionage on a very large scale”. [51] Some people did not want to be visible to everyone on the Internet [52]. But there was no way to escape [53]. It was impossible to opt out of tracking and the receiving of recommendations [54]. While people wanted to choose whether to rely on algorithmic recommendations, the only option was to ignore the suggestions [55]. There were no means to challenge the recommendations [56].

The general public also lacks knowledge [57]. Indeed, the answers reveal differences in people’s technical knowledge about tracking and its countermeasures. Some merely said that it was all about algorithms [58]. They know things [59]. They do things [60]. Different platforms are somehow linked together but it is impossible to understand how [61].

For others, the starting point for trying to understand tracking was clear: “I think it is related to cookies”. [62] In fact, these “cursed cookies are spread everywhere”. [63] Some have tried to mitigate the situation by deleting cookies after each browsing session and relying on ad blockers and virtual private networks [64]. Even then, some had to acknowledge that they still received personalized advertisements [65]. Others acknowledged that “accepting cookies is much faster than restricting or refusing cookies on most websites”. [66] Although some stated that the so-called cookie banners contain sufficient information for understanding personalized advertisements, it can be argued that the consent forms and banners brought by the GDPR are neither usable nor working well [67]. “I would have to switch off each service individually” so that “it would take hours until I worked my way through to the switch-off/reject option”. [68] According to recent academic research, the opt-out functionality is not even working properly on most Web sites (Liu, et al., 2022). Even if it were, and time were taken to switch the cookies off, Web sites would ask again in a few weeks [69].

Besides cookies, many suspected that the recommendations were based on their browsing histories and previous searches on search engines [70]. These are both correct presumptions. Many suspected that recommendations were further based on their past actions and the content consumed by their contacts and other users on the same platform [71]. Again, these suspicions are correct.

Due to their reliance on intrusive but still often inaccurate tracking techniques, recommender algorithms were seen as opaque and inscrutable [72]. The opacity left plenty of room for speculation in all directions [73]. As has been common among social media users (Eslami, et al., 2016), such speculations were also presented by some respondents. For instance, someone’s recommendations were “related to products my wife bought”. [74] Some speculated that there must be something intentional behind the recommendations of conspiracy theories and hateful content after having previously consumed content on social justice and climate change [75]. Another respondent wondered why he or she was getting recommended climate denialism after having watched videos on climate change, concluding that platforms engage in the dissemination of propaganda [76]. The same went for viewing feminist content, after which misogynist content soon started to follow on a platform [77]. Others speculated further.

“Was a purchase made by my bank account shared from my bank with my eBook seller and fed into the recommendation?”, asked one respondent who further continued to speculate about other possibilities that “include the use of data from tracking cookies, tracking pixels and browser fingerprinting”. [78] Tracking pixels, third-party JavaScript code and analytics, browser fingerprinting, and embedded videos and other content were also concerns raised by many others [79]. All are also well-known and well-analyzed tracking techniques in academic privacy research (Bekos, et al., 2023; Bermejo-Agueda, et al., 2023; Laperdrix, et al., 2020; Ruohonen and Leppänen, 2018, 2017; Ruohonen, et al., 2018). But nowadays privacy-infringing tracking techniques extend well beyond the Web.

Among other things, algorithms “sometimes also listen in on the mic”. [80] This listening presumption was also shared by many others who felt that it is “like my phone is secretly listening” or that “there is software listening to what I am saying”. [81] Smartphones “hear” telephone conversations and “pick up discussions”, subsequently using this information for advertisements [82]. Such listening is unacceptable and horrifying [83]. Given past privacy scandals and the ongoing practices with voice assistant technologies used in smartphones and other products (Edu, et al., 2020; Iqbal, et al., 2023; Tabassum, et al., 2020), it is difficult to objectively reassure individuals that they are just paranoid; that unauthorized algorithmic monitoring of phone calls for advertising would be entirely out of the question.

Finally, two other well-known privacy-related topics were present in the dataset. The first is the information asymmetry between technology companies and users (Hjerppe, et al., 2023). This asymmetry is concisely summarized by the following comment: “it is difficult for us but easy for them”. [84] The asymmetry extends toward a wider information inequality between platforms and everyone else, including citizens, researchers, civil society, and public authorities [85]. To this end, some have used the term epistemic inequality to describe the situation (Zuboff, 2020). Some citizens proposed a simple solution to the inequality: consumers and people in general “should have the same level of insight as the advertisers who selected the targets for their ads”. [86] The second topic is the famous privacy paradox: people recognize that they are under surveillance but do little to protect themselves (Barnes, 2006). In other words, the “process of collecting this data and its use is not clear, which, however, does not worry the majority of Internet users, who without hesitation agree to the terms and conditions of various services”. [87] From a European perspective, it is not the duty of citizens alone to ensure that their fundamental rights are respected. Regardless, privacy and data protection are further key elements behind the overall skepticism expressed by the respondents toward recommender systems.

4.3. Recommendation effectiveness

Some representatives from media companies stated that algorithms are good at delivering relevant advertisements and promoting quality content [88]. Only a couple of citizens agreed, noting that recommender systems offered exactly what was looked for [89]. Many others expressed the opposite opinion; they were constantly bombarded with advertisements for products and services they were not interested in [90]. Numerous other respondents felt that recommendations and advertisements were often random, incorrect, and entirely unrelated to their interests [91]. This alleged lack of recommendation effectiveness may have contributed toward the skepticism.

An illuminating example would be employees working on alcohol policy and alcohol-related treatment who were frequently targeted with content about alcoholic beverages [92]. The recommendations were often also offensive to people; before dying of cancer, someone’s mother had received a continuous stream of ads on miracle workers, coffins, and burials [93]. As for the incorrect recommendations, someone said that he or she was not interested in females but still received ads on mail-order brides [94]. Another respondent went to a jazz concert and was later advertised insurance schemes [95]. Such incorrect recommendations make platforms look stupid according to some [96]. Garbage in, garbage out, as one respondent aptly summarized the situation for many consumers [97].

Many respondents stated that they were immune to being influenced by recommender systems [98]. According to some, people can always use their own expertise for judgments and they always have the choice of what to click [99]. Some only relied on recommendations if these came from their own contacts or known third parties [100]. Advertisements were also seen as easily recognizable and harmless as such [101]. They were a joke to some [102]. Others simply did not care [103]. It was just spamming [104]. These few opinions notwithstanding, some respondents started from a presumption that recommender systems and their algorithms are incredibly pervasive, possessing the capability to influence interests, opinions, and behaviors, including social group formation [105]. Indeed, there is a whole branch of academic literature on persuasion and persuasive recommender systems that specifically seek to change attitudes, behaviors, or both (Cremonesi, et al., 2012; Teppan and Zanker, 2015). The basis for the presumption also becomes evident when taking a look at the harms recommender systems cause for individuals and the societal threats they nowadays allegedly entail.

4.4. Individual harms and societal threats

In addition to the already realized threat to privacy and other fundamental rights, numerous different harms and threats associated with recommender systems were pointed out by respondents. Both individual harms and societal threats were raised. As can be seen from Table 2, the most common concern was about disinformation, misinformation, and hate speech.

It should be mentioned that there were several identical answers in this regard. These answers were submitted in English and particularly in German and French. Although traceability is impossible without further data, it may be that a campaign was launched somewhere on the Internet for delivering this particular message to the DSA’s open consultation. Nevertheless, there were still hundreds of unique and genuine answers about disinformation, misinformation, propaganda, hate speech, and related issues. The concern is real and pressing according to many EU citizens.

 

Table 2: Harms and threats.
Category | Frequency
Disinformation, misinformation, and hate speech | 697
Illegal, harmful, and offensive content or goods | 41
Algorithmic biases and discrimination | 31
Democracy, politics, elections, and polarization | 26
Engagement, amplification, and emotionality | 21
Competition, Big Tech, and market imperfections | 21
Intellectual property and copyright infringements | 20
Commercial promotion and priority rankings | 16
Radicalization and extremism | 11
Failure of the P2B regulation | 7
Failure of the GDPR | 4
Failure of the e-commerce directive | 2
Consumerism and the climate change | 1

 

What started out as a simple idea about contacting other people and sharing photographs turned out to be a “propaganda monster”. [106] It was really shocking to many that platforms are allowed to make money from facilitating disinformation and hate [107]. It made people mad [108]. The facilitation was seen as intolerable to any healthy society [109]. The most common point in hundreds of answers on this topic was that amplification and monetization of these should be outlawed and subjected to criminal justice. In other words, not only should those who spread disinformation and hate be held responsible, but also platforms should be subjected to both financial sanctions and criminal law as enablers of this conduct [110]. As could be expected, this viewpoint of the majority was met with a concern about censorship expressed by a minority [111].

The second most common concern was about illegal and harmful content, including advertisement content for products and services. As there were numerous specific questions on this topic in the consultation, including those related to content moderation, it suffices here to only note a couple of points. The first is that recommender systems were widely seen to promote illegal and harmful content. According to some, therefore, recommendations should not occur for children and young people [112]. The other point is that many representatives of businesses, media, publishers, and cultural arts argued that these systems should not recommend content that infringes copyright and violates intellectual property rights in general.

The third most common concern was about different biases that recommender systems and their algorithms have. As this concern is well recognized and extensively studied in computer science, it again suffices to only note a few general points. In general, these biases and the associated discrimination involve those that the public have, those that are present in training data, and those that the individual developers of the recommender systems have [113]. The consequences from these biases vary; disinformation and hate, so-called filter bubbles and echo chambers, radicalization, and related issues were commonly raised by the respondents [114]. On the business side, biases were also seen to involve price discrimination [115]. In a similar vein, streaming media were seen as biased toward not recommending European and culturally diverse video content [116]. Platforms were also seen as biased in promoting free and ad-funded content instead of subscription content [117]. As could be expected, they were also accused of being politically biased in their actions [118].

These various biases correlate with the fourth and fifth most common concerns raised by the respondents: the effects of recommender systems upon politics, democracy, and elections on one hand and their foundation upon engagement, amplification, and emotionality on the other. In terms of the former, many believed that societal fairness and democracy are under threat because of the platforms and their “filth”. [119] “The whole system of making politics about sensations and emotions is a disaster”. [120] The Cambridge Analytica scandal and the mass killings of the Rohingya minority in Myanmar were raised as alarming precedents [121]. The United States was also seen as a cautionary example of what recommender systems and algorithms can do to a society [122]. According to these critical viewpoints, algorithms are used by monopolists operating in a legal vacuum against people and their rights [123].

Indeed, according to some, the public lacks knowledge about its own psychology; therefore, it is dangerous to allow private salespeople and monopolist platforms the power to manipulate societal ideologies [124]. Such manipulation via recommender algorithms allows control of the masses [125]. These systems consequently lead to various abuses, “the most dangerous being those related to the manipulation of the masses for political and electoral purposes, which represent a risk to democracy”. [126] What allows these abuses is the business model of the platforms: the engagement that keeps people addicted to the platforms through a continuous delivery of dopamine doses, which, in turn, leads to the facilitation of disinformation, hate, polarization, radicalization, extremism, and various other societal ills [127]. The current platforms and their recommender systems are subject to “fundamental rights abuse stemming from users’ ‘engagements on steroids’, economic revenue as an underlying reasoning behind open recommendations systems and dominant market position of these actors”. [128]

Hence, the abuses are closely tied to the dominant market position of Big Tech companies. Regulating these companies “is one of the biggest challenges we face globally today”. [129] However, it is beyond the scope of this paper to delve deeper into the various market imperfections that Big Tech companies cause in Europe and elsewhere. Some examples can still be noted. For instance, many respondents noted that platforms themselves prioritize their own content and products unfairly [130]. As is well-known, media, publishers, and brands are deprived of their advertisement revenues by platforms [131]. At the same time, as was noted, citizens are deprived of their privacy and other fundamental rights. Although the DSA is only a part of the EU’s recent regulative efforts, some people had already lost all hope: instead of attempting to solve the Big Tech conundrum via regulation, the whole Internet should be rebuilt on the principles of decentralization and open source, according to them [132].

It is worth further stressing that many expressed an opinion that the EU’s previous regulative attempts have failed to deliver. In particular, businesses get away with various data protection violations because the GDPR is not properly enforced [133]. Nor does it impose any transparency requirements [134]. The many problems with the regulation’s enforcement are well-known also in academic research (Ruohonen and Hjerppe, 2022; Waldman, 2021). The recent decisions against Big Tech companies have done little to change their practices (Armitage, et al., 2023). Another regulatory failure relates to Regulation (EU) 2019/1150, which is commonly known as the P2B regulation.

According to some, this regulation has done nothing to change the negotiation imbalance between platforms and publishers, including the former’s ability to unilaterally dictate terms [135]. Even though the regulation imposes some transparency requirements for the main parameters used to classify and categorize content, it does not prevent the noted self-preferencing through which platforms promote their own content [136]. Nor has it delivered in terms of fostering transparency in practice [137]. The regulation is also limited to relationships between platforms and businesses, excluding the relationships between platforms and their users, which are relevant in terms of content delivery [138]. The problems with the outdated e-commerce Directive 2000/31/EC are also well-recognized; in fact, these were among the motivations of the European Commission for introducing the DSA (Cauffman and Goanta, 2021; Heldt, 2022). To this end, some also suggested that the e-commerce directive should be extended so that the directive’s prohibition of unsolicited e-mail ads would become the norm for all online advertising [139].

Finally, a brief reflection is required about the academic research on the issues raised by the respondents. The existing research on Big Tech companies and their engagement-based business model is extensive. The same applies to research on disinformation. No references are required for delivering these points; there are literally hundreds of relevant works on these topics, including several monographs.

However, there is no consensus over the societal effects of these; disagreements exist regarding whether recommender systems and algorithmic solutions in general cause political polarization, echo chambers and filter bubbles, hate, radicalization, and related ills. According to recent studies, including systematic literature reviews, there exists empirical evidence for the underlying causal claims, but the evidence is still insufficient to draw definite conclusions (Banaji and Bhat, 2022; Castaño-Pulgarín, et al., 2021; Geissler, et al., 2023; Hassan, et al., 2018; Gowder, 2023; Guess, et al., 2023; Iandoli, Primario, and Zollo, 2021; Kubin and von Sikorski, 2021; Ribeiro, Ottoni, West, Almeida, and Meira, 2020; Smith, Jayne, and Burke, 2022; Terren and Borge, 2021; Tontodimamma, Nissi, Sarra, and Fontanella, 2021; Whittaker, Looney, Reed, and Votta, 2021; Yesilada and Lewandowsky, 2022; for non-academic articles see, e.g., Adams, 2019; Ballard, 2023; Tech Transparency Project [TTP], 2023). Recently, studies conducted on behalf of the European Commission (European Commission, Directorate-General for Communications Networks, Content and Technology, 2023; European Commission, Directorate-General for Migration and Home Affairs, 2023) found that the recommender systems of most large online platforms indeed amplify foreign disinformation alongside terrorist, violent extremist, and borderline content, and the existing countermeasures still seem insufficient (European Fact-Checking Standard Network [EFCSN], 2024). Such results have prompted some to again call for a suspension of recommender systems altogether (Irish Council for Civil Liberties [ICCL], 2023). If platforms are unable to not recommend such content, it becomes moot to argue that the problems would not also be technical, and that platforms could in theory give users a choice to choose the content they wish to see (cf. Kapoor and Narayanan, 2023). Furthermore and alternatively, triangulation with the results on the perception of generally poor recommendation effectiveness (see Section 4.3) also allows questioning the severity of the harms and threats in practice. Be that as it may, it suffices at least to conclude that the respondents’ concerns are justified, given their nature as concise political opinions delivered for a policy consultation. In addition, the sheer number of distinct concerns allows concluding that they have contributed to the overall skepticism expressed by the consultation’s respondents.

4.5. Solution proposals

Numerous different solutions were suggested by the respondents to the consultation. These are summarized in Table 3. The most obvious and common proposition is not listed: transparency of algorithms. Keeping this point in mind, the most common solution listed in Table 3 was about choice; people should have a choice over whether they actually want to use recommender systems.

 

Table 3: Solutions proposed.
Category | Frequency
Possibility to opt-out or opt-in only by choice | 60
Third-party audits, proofs, and verification | 50
Education and explanations for laypeople | 43
Prohibition of all (open) recommender systems | 37
Human in the loop | 31
Research, science, and civil society | 27
Fundamental rights and European values | 19
Enforcing the GDPR and the e-privacy directive | 17
Full public disclosure of all algorithms | 15
Promotion of media and journalism | 13
Platform liability for content | 11
Access to training data | 11
Risk-analysis and impact assessments | 9
Open source algorithms and free licenses | 7
Alternative ranking criteria | 5
Certifications for algorithms | 4
Standardization of algorithms | 4
Codes of conduct and self-regulation | 4
Downgrading, fact-checking, and demonetization | 3
Quality seals and badges | 2
Values for algorithms | 1
Content filters | 1

 

According to a less strict version of this choice proposal, people should have an option to opt out from automated recommender systems and use alternative means for ranking. These ranking criteria include sorting chronologically, alphabetically, according to price, and so forth [140]. According to a strict version of the proposal, these alternative ranking criteria should be the default such that an explicit opt-in would be required for reliance on recommender system algorithms. If users opted in, they should have a further right to object to automatic recommendations [141]. Children and minors were again seen as a group who specifically should be given a mandatory opt-out or an opt-in choice [142]. In general, both the opt-out choice and the stricter opt-in choice reflect the foundational European concept of informational self-determination.

The second most common proposal was about third-party audits of recommender systems and their algorithms. According to many, these should be conducted either by competent public authorities or trusted academics [143]. Competence and expertise were emphasized in this regard; “it’s a matter of professionals”. [144] Experts should be the ones asking the questions [145]. According to many respondents, such experts should also have full access to the training data used for recommender systems. Yet, as further clarified in the next section, these audits should neither “reveal insights into particular users” nor endanger trade secrets [146]. Because full public disclosure allows platform manipulation, the audits should be reserved only for independent auditors bound by secrecy, composed of technical experts with research capabilities [147]. Especially politicians should steer away from audits [148]. Despite the reservations of some respondents, a large number of citizens demanded that all recommender algorithms should be “publicly available and transparent”. [149] If such disclosure is not possible or desirable, at least all inferred data should be made available to users upon request, according to some [150]. The auditing proposal was further accompanied by many ideas about practicalities. For instance, there should be an “algorithm officer” according to some [151]. Others recommended institutionalized data-sharing partnerships with open application programming interfaces [152]. In line with endorsements from some academics (Busch, 2023), it was also suggested that platforms should allow third parties to develop algorithms [153]. Warrants for inspection were noted in case platforms would refuse to cooperate [154].

The third most common proposal related to the education of people and different explanations for recommender systems. Particularly the explanations have been studied extensively in scholarly research for some time (Gedikli, et al., 2014; Martijn, et al., 2022; McSherry, 2005; Tintarev and Masthoff, 2007). Therefore, it suffices to only note a few critical points raised by the respondents. Some argued that complicated technical explanations are unlikely to be useful for users [155]. At the same time, like some academics have done (Ruohonen, 2023a), others argued that a mere summary of the parameters used in machine learning models is not sufficient [156]. It was also noted that many terms such as explainability and interpretability are still vague, an argument shared also by some scholars [157]. Given such criticism, some recommended different levels of explanations for different audiences; “basic, applied, expert, academic”. [158] Such suggestions aligned with arguments about a more general need for education; explanations for recommender systems are not sufficient alone [159]. Educating citizens should start already in state education, according to some [160]. Finally, some business representatives argued that content recommendation decisions are so complex that they would be difficult to explain to users [161]. Given the intrusive but obscure global tracking infrastructure that the companies have built over time, it is no wonder that they themselves feel incapable of explaining their recommendations and advertisement choices.

The fourth most common proposal for solving the issues was simple: all recommender systems should simply be prohibited. This drastic measure was promoted by a surprisingly large number of EU citizens. “Punish the algorithm!”. [162] Ban recommender systems [163]. Such short but decisive comments reflect the critical attitude many people in Europe have toward recommender systems. Though, some would be willing to make concessions; some would only allow recommender systems for goods and services [164]. Analogously, according to many respondents, particularly the recommending of political content should be prohibited together with medical advertising [165]. In addition to the many societal threats and individual harms, privacy and data protection were often the rationale behind these prohibition proposals. “Fundamental ban” because the systems “cannot be legally represented in compliance with the applicable requirements for the protection of personal data”. [166] In this regard, some urged that the GDPR’s Article 58(2) should be immediately invoked to impose a ban [167]. In addition to these prohibition calls, a large number of citizens argued that the problems would be easily solved if only the GDPR were strictly enforced together with the e-privacy Directive 2002/58/EC, which regulates Web cookies, among other things. This alleged failure of the previous regulations may have contributed to the critical viewpoints and skepticism expressed by some.

Regarding the GDPR, there was also some confusion in the answers about whether the statistical correlations and inferred data used in recommender systems represent personal data. According to some, these are not personal data and thus enforcement via the GDPR cannot be done [168]. However, some others were sure that behavioral and inferred data fall under the GDPR [169]. Given the GDPR’s wording about personal data as any information relating to an identified or identifiable person, existing interpretations including those related to training data (Veale, et al., 2018), and the fact that personalized recommendations and advertisements explicitly target individual data subjects based on their personal data, the verdict is on the side of the latter arguments. In other words, the GDPR clearly applies.

These comments aligned with answers that emphasized human rights, fundamental rights in the EU, and European values. Algorithms should generally “stay within the boundaries of the Charter of Fundamental Rights”. [170] They should also promote cultural diversity that “is one of the pillars of the European Union’s founding texts”. [171] Regarding such diversity, recommender systems should particularly promote media pluralism in Europe, public broadcasting, and European content in video streaming services [172]. In general, these systems should obey data protection, the rule of law, justice, proportionality, and humanistic values [173]. To ensure compliance with existing laws and fundamental rights, strict enforcement and harsh financial penalties were recommended [174]. Any financial incentives behind recommender systems should be disclosed or removed [175]. Until platforms stop causing harms, their profits should be depleted [176]. Platforms should also pay their taxes [177]. The overall skepticism is again highly visible.

Human supervision of recommender systems was also a popular proposal. Despite growing interest in academic research as well, many questions about human oversight of algorithms, including recommender systems, remain unclear (Andersen and Maalej, 2023; Lai, et al., 2023). When considering the size of the Big Tech platforms, it also remains unclear how such human oversight would work in practice, particularly regarding harmful content and content removals.

Examining Table 3, numerous other proposals were also presented in the open consultation. Of these, liability deserves a mention, not least because on the other side of the Atlantic the debate has largely been about the Communications Decency Act of 1996 and its provisions shielding platforms from legal liability for content posted by their users (Epstein, 2020; Pagano, 2018). In this regard, the European opinions differed to some extent. A larger group of respondents argued that platforms should not have a get-out-of-jail card on content; they should be treated as publishers [178]. By implication, they should be subject to legal liability over content [179]. In particular, some argued that platforms should not benefit from the liability exemption specified in Article 14 of the e-commerce directive [180]. Disinformation and hate speech were seen as a specific type of content to which liability should particularly apply [181]. According to a minority, however, platforms should not be held liable for algorithmic flaws in recommender systems because these pose only a low risk to users [182]. Therefore, “companies should benefit from broad immunity from liability for the recommendations or suggestions made by their algorithms”. [183] Many further points were also raised regarding the potentially harmful consequences of the DSA for businesses.

4.6. Concerns

Business representatives responding to the consultation raised various distinct concerns, which more or less conflicted with those expressed by citizens. A few brief points are warranted about these concerns, which are summarized in Table 4. To begin with, the usual neoliberal or libertarian viewpoint is visible in the dataset; “no regulation is needed”. [184] Algorithms should remain free from any interference by governments, political parties, and non-governmental organizations [185]. These viewpoints were accompanied by concerns about the competitiveness of smaller European companies; “recommender systems are crucial for European scaleups to grow and compete”. [186]

 

Table 4: Benefits, non-issues, and concerns.
Category | Frequency
Protection of trade secrets | 20
Already addressed via the P2B regulation | 18
Usefulness to users | 14
Bad actors can exploit transparency | 13
Right to appeal on content decisions | 6
Already addressed via the GDPR | 6
Already addressed via the omnibus directive | 5
Transparency fosters trust | 4
Freedom of expression must be ensured | 4
Risks of hacking | 3
Barriers to innovation | 2
Editorial freedom must be ensured | 2
Platforms promote freedom of expression | 2
Liability threats | 2
Bureaucracy and costs | 1

 

Besides the antagonism toward all regulative action, the main concern of businesses was about the trade secrets that transparency requirements for algorithms might reveal, as has also been pointed out by scholars (Turillazzi, et al., 2023). Analogously to the arguments of many academics [187], concerns were also expressed that algorithmic transparency might expose systems to hacking and that bad actors could exploit transparency to manipulate recommender systems. Click and stream farms were used as examples of such manipulation [188]. To this end, Big Tech companies argued that even a small amount of transparency endangers their trade secrets as well as the security and integrity of their platforms and infrastructures, potentially doing consumers and citizens more harm than good [189]. Interestingly, these critical points did not explicitly address the most common proposal for a solution (see Table 3): the possibility to opt out, or to opt in only by choice.

As for trade secrets, a common point was that transparency was already addressed in the so-called omnibus Directive 2019/2161 and particularly the P2B regulation, which also provides legal safeguards against unwarranted disclosure of technical details that might be used to manipulate ranking algorithms and automated filtering decisions [190]. As was noted earlier, however, not all businesses agreed with this claim about the P2B regulation. A related point was that the GDPR already supposedly addressed some concerns [191]. In particular, the regulation’s Article 22, which provides an opt-out from automated decision-making and profiling, was seen as sufficient for avoiding further regulations [192]. However, like some academics (de Hert and Lazcoz, 2021), critics noted that the Article’s wording about legal effects or other similarly significant effects prevents citizens from opting out in the context of recommender systems [193].

Finally, various other concerns were raised, albeit to a lesser extent in terms of volume. For instance, a concern was raised about mandating the use of explainable algorithms, which, according to one respondent’s viewpoint, prevents the use of more advanced algorithms [194]. Other concerns related to automated content filters for recommender systems, which were supported by some citizens [195]. According to critics, no automated ex ante controls for content should be forced upon companies [196]. To this end, some noted that only ex post enforcement should be considered, as ex ante measures were seen as useless [197]. In a similar vein, a concern was also raised that authorities might be able to dictate what type of content social media companies should recommend, which would be particularly problematic in those European countries with non-independent authorities and weak rule of law provisions [198]. In other words, the freedom of expression must be guaranteed. Although the relation between content moderation and editorial freedom was a more pressing issue (Papaevangelou, 2023), some media representatives were also concerned about potential effects on the editorial freedom to choose rankings for media content [199].

4.7. The DSA’s answer

It is necessary to take a brief final look at what the actual DSA says about recommender systems and what it imposes upon them. Recitals 55, 70, 84, 88, 94, 96, and 102 set the overall scene. The discussion in these recitals includes ranking demotion and shadow banning, suspension of monetization and advertising for bad actors, transparency, risk assessments for countering systematic infringements, testing of recommender algorithms, bias mitigation and data protection measures, particularly with respect to vulnerable groups and the GDPR’s category of sensitive personal data, availability of data for auditors, and standardization. However, the actual regulatory mandates for recommender systems are weak and limited in scope. Only three such mandates are imposed.

First, Article 27 specifies transparency requirements for recommender systems. These are simple enough: the main parameters used in the systems should be specified in plain and intelligible language. The information about the main parameters should include explanations of the most significant ranking criteria and the reasons for the relevance of these parameters. The information provided should also cover any options offered to users for altering the parameters. Thus, these transparency requirements resemble those specified in Article 5 of the P2B regulation. Overall, they are weak and easily subjected to criticism. As has already been pointed out (Botero Arcila and Griffin, 2023; Busch, 2023; Helberger, et al., 2021), a few vague sentences about main parameters in terms of service hardly qualify as transparency that would educate and empower people.
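To illustrate what such a disclosure could look like in practice, the following minimal sketch in Python shows one hypothetical, structured way of listing a system’s main parameters together with plain-language explanations and an indication of whether users can modify them. The sketch is purely illustrative: the DSA prescribes no such format, and all names, values, and the settings path in it are invented.

# A purely illustrative sketch, not the DSA's or any platform's actual format:
# a structured disclosure of a recommender system's "main parameters", each
# accompanied by a plain-language explanation. All identifiers are hypothetical.
MAIN_PARAMETER_DISCLOSURE = {
    "system": "example-feed-ranker",  # hypothetical system name
    "main_parameters": [
        {
            "name": "predicted_engagement",
            "explanation": "How likely you are to interact with an item, "
                           "estimated from your past activity.",
            "user_modifiable": False,
        },
        {
            "name": "recency",
            "explanation": "How recently the item was published; newer items "
                           "are ranked higher.",
            "user_modifiable": True,
        },
    ],
    "where_to_modify": "Settings > Feed preferences",  # hypothetical path
}

if __name__ == "__main__":
    # Print the plain-language explanations, i.e., the part of the disclosure
    # aimed at ordinary users rather than auditors.
    for parameter in MAIN_PARAMETER_DISCLOSURE["main_parameters"]:
        print(f"{parameter['name']}: {parameter['explanation']}")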

Second, Article 34 mandates very large online platforms (VLOPs) and very large online search engines (VLOSEs) to carry out risk assessments. These also cover recommender systems. Such assessments should address questions such as the manipulation potential of the systems and their role in the amplification of illegal content. Article 35 continues with risk mitigation, which includes testing of recommender systems and their algorithms. Article 40 further mandates that VLOPs and VLOSEs disclose the design, logic, functioning, and testing of their recommender systems to the competent coordinators or, in particular, the European Commission. According to Article 44, the Commission is also set to develop voluntary European standards, including those related to choice interfaces and information on the main parameters. All in all, the common proposal in the consultation about auditing was taken into account in the DSA.

Third, Article 38 mandates VLOPs and VLOSEs to provide at least one ranking option that is not based on profiling, as defined in the GDPR’s Article 4. In other words, Big Tech companies should provide at least one easily accessible option that does not rely on personalization. Hence, the most common proposal of EU citizens, an opt-out choice, was to some extent taken into account, although the stronger opt-in version of the proposal was bypassed by the lawmakers. As was noted in Section 4.5, both choices can be seen to reflect the European concept of informational self-determination, and, therefore, these were presumably relatively easy to lobby for. It is finally worth remarking that Article 38 has also been enforced in a recent court case (Kupiec, 2023), but it remains to be seen whether the article’s intervention will change the recommender systems landscape more generally.
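As a toy illustration of what a non-profiling option could mean at the level of a ranking function, the following Python sketch contrasts a personalized ordering with a reverse-chronological one that uses no data about the individual user. It is an assumption-laden example rather than a description of how any platform implements, or should implement, Article 38; the item fields and the placeholder profiling score are invented.

from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Item:
    title: str
    published_at: datetime
    profile_score: float  # stand-in for a profiling-based relevance estimate

def rank(items: List[Item], profiling_enabled: bool) -> List[Item]:
    if not profiling_enabled:
        # Non-profiling option: order by recency alone; no data about the
        # individual user is used.
        return sorted(items, key=lambda i: i.published_at, reverse=True)
    # Personalized default: a placeholder score stands in for whatever
    # profiling-based model a real platform would use.
    return sorted(items, key=lambda i: i.profile_score, reverse=True)

if __name__ == "__main__":
    items = [
        Item("A", datetime(2024, 1, 1), profile_score=0.9),
        Item("B", datetime(2024, 2, 1), profile_score=0.1),
    ]
    print([i.title for i in rank(items, profiling_enabled=False)])  # ['B', 'A']
    print([i.title for i in rank(items, profiling_enabled=True)])   # ['A', 'B']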

 

++++++++++

5. Conclusion

This paper presented a qualitative analysis of the perceptions on recommender systems by European citizens, civil society groups, public authorities, businesses, and others. The dataset examined was based on the answers submitted to the DSA’s open consultation in 2020. The following eight points summarize the qualitative results obtained:

The central tenet, the hypothesized skepticism and critical attitudes, is well represented and visible in the dataset. Given the close connection of many recommender systems to the online advertising business, some of the skepticism is understandable, especially given the enduring problems in the GDPR’s enforcement (Ruohonen, 2023a; Ruohonen and Hjerppe, 2022). It remains to be seen whether the many problems can even be satisfactorily addressed without also addressing the many privacy violations. Having said that, the individual harms and societal threats are generally more difficult to evaluate, although it is clear that these did not explicitly enter into the DSA’s recommender system scope. Furthermore, some of the proposals, such as the promotion of media and journalism, are perhaps better addressed in other regulations, such as the forthcoming European Media Freedom Act (EMFA). In particular, the calls to ban all recommender systems are close to neo-luddism. Even when such calls are bypassed, it can be argued that the DSA left many questions unanswered with respect to recommender systems.

Some limitations should also be acknowledged. Although qualitative analysis itself is often open to criticism about subjectivity and researcher bias, a more important concern relates to the dataset used; it may be biased. As the dataset consists of responses to a policy consultation for a regulative initiative, it may well be that the responses are skewed toward those who are particularly interested in European politics and the EU’s technology policy-making. This potential bias applies to citizens, civil society groups, academics, and businesses alike, each of which may have their own biases due to their different political interests and policy goals. By implication, the dataset cannot be considered representative of the perceptions and attitudes of Europeans as a whole. This non-representativeness may also explain a portion of the overall skepticism. While a comprehensive European survey would be required to address the representativeness issue, an alternative promising path for further research would be to classify the answers according to the types of respondents. This would allow, among other things, a better examination of the heavy lobbying.

Other biases may also be present, including those related to socio-economic factors. For instance, younger people typically have fewer concerns about privacy and commercial surveillance (Kalmus, et al., 2022). A further potential source of bias stems from the consultation’s broad scope, which may have led the respondents to consider mainly the recommender systems of Big Tech companies, hence excluding or downplaying considerations and opinions about smaller recommender systems used in European online marketplaces, media, and other domains.

As for further research, the noted biases translate into an important question about the EU’s technology-related policy-making. Little is known about the politics surrounding recommender systems and artificial intelligence in general. Who is promoting and lobbying what, and why? Regarding such important questions, contributions from political science, international relations, and associated policy sciences are generally scarce (Ruohonen, 2023b; Srivastava, 2023). Despite the usual prodigious lobbying (Bendiek and Stuerzer, 2023), many observers maintain that actual political struggles remained only modest during the DSA’s surprisingly fast negotiations (Papaevangelou, 2023; Schlag, 2023). Therefore, a plausible hypothesis can also be presented for further research: the lobbying from Big Tech and other companies was successful in limiting the regulatory scope for recommender systems to only a few relatively weak mandates.

There is also some room for criticism of the EU’s recent regulative efforts from a policy-making perspective. The EU seems eager to pursue many new regulations while, at the same time, the enforcement of existing ones, the GDPR in particular, is facing many problems. A similar concern remains about the DSA’s future enforcement. Having learned from the GDPR’s enforcement problems, lawmakers largely centralized the administration and enforcement of the DSA at the EU level and further required funding for the enforcement from Big Tech and other companies.

Despite these measures, critics have already questioned whether meaningful accountability will be delivered in practice due to potential enforcement obstacles, coordination problems between national public authorities, procedural issues, problems in auditing, incoherence in national adaptations, concerns over the freedom of expression, partial outsourcing to the private sector, and other problems (Barata and Calvet-Bademunt, 2023; Bhatia and Allen, 2023; Cauffman and Goanta, 2021; Jackson and Malaret, 2023; Riedenstein and Echikson, 2023; Strowel and De Meyere, 2023; Turillazzi, et al., 2023; van Cleynenbreugel, 2023; van Hoboken, 2022). Against this backdrop, all the craze around artificial intelligence should not overshadow the foundational fact that transparency and accountability requirements also apply to policy-makers, regulators, and public administrations. On the bright side, however, many of the DSA’s fundamental goals, including those for recommender systems, were recently more or less adopted also by UNESCO (2023), indicating that the so-called Brussels effect may perhaps apply also to this new type of platform regulation. End of article

 

About the author

Jukka Ruohonen is an assistant professor in the SDU Center for Industrial Software at the University of Southern Denmark (Syddansk Universitet).
E-mail: juk [at] mmmi [dot] sdu [dot] dk

 

Acknowledgements

This research was funded by the Strategic Research Council at the Academy of Finland (grant number 327391).

 

Notes

1. 543; cf., also 2006.

2. 949; 1032; 1445; 2053; 2243; 2348.

3. 2066; 2320; 2421; 2852.

4. 2150; 2529.

5. 2201; 2284; 2348.

6. 889.

7. 2359; 2439.

8. 723; 741; 1186; 1399; 2030; 2192.

9. 478.

10. 1277; 1720; 2362; 2474.

11. 2813; 2826.

12. 2798.

13. 2534.

14. 2336.

15. 2342; 2362.

16. 2078.

17. 2457.

18. 1094; 2306.

19. 2139.

20. 2846.

21. 1543.

22. 1542.

23. 2296.

24. 662; 950; 956; 1031; 1099; 1147; 1578.

25. 1475; 1712.

26. 1595.

27. 2443.

28. 1405; 2758.

29. 2189; 2620.

30. 689; cf., also 2146.

31. 564.

32. 785; 964; 1327; 1503; 1584; 1670; 1768; 1830; 1839; 1988; 1996; 2009; 2218; 2395; 2409; 2622; 2769; 2782; 2792.

33. 2531.

34. 859; 1091.

35. 262; cf., also 1182.

36. 2638.

37. 1074; 1126; 1201; 1208; 2414; 2630.

38. 55; cf., also 840; 1190.

39. 826.

40. 203; 740; 907; 1722; 1141; 2238.

41. 2379.

42. 266; 983; 1001; 1020; 1403; 1419; 2097.

43. 1063.

44. 1288; 1419.

45. 1346; 2454; 2524; 2807.

46. 11; cf., also 1523; 2421.

47. 2080.

48. 2225.

49. 739; 1247; 2353.

50. 1860; 2749.

51. 721.

52. 1095.

53. 1207.

54. 1509; 1535; 1699.

55. 1703, 1727; 1961.

56. 2250.

57. 1235; 1712.

58. 2711; 2737.

59. 2282.

60. 2834.

61. 2410.

62. 319; also 685; 893; 1120; 1810; 2088; 2345; 2350; 2353; 2561; 2685; 2778; 2784; 2793; 2794.

63. 628.

64. 856; 1110; 1241; 1271; 1347; 2276; 2552; 2675.

65. 1246.

66. 617.

67. 945.

68. 121; cf., also 1353.

69. 2551.

70. 666; 683; 750; 891; 1088; 1190; 1283; 1287; 1373; 1407; 1553; 1807; 1884; 2096; 2146; 2160; 2165; 2525; 2579; 2596; 2666; 2794; 2810.

71. 675; 748; 1207; 1482; 1910; 1950; 2058; 2086; 2139; 2143; 2154; 2487; 2491; 2685; 2741; 2778; 2789; 2818; 2847.

72. 1801; 1972; 2408; 2682.

73. 1691; 2008.

74. 949.

75. 1911.

76. 2175.

77. 2631.

78. 61.

79. 369; 556; 679; 1020; 1401; 1419.

80. 527.

81. 623; 2121; cf., also 2475.

82. 914; 1322; cf., also 2535.

83. 2177; 1399.

84. 1491.

85. 1403.

86. 58.

87. 60.

88. 925.

89. 1848; 2765.

90. 1071; 1090; 1492; 1518; 2243; 2461; 2746.

91. 1127; 1163; 1835; 1968; 2037; 2053; 2139; 2192; 2201; 2327; 2368; 2597; 2706; 2860; 2861.

92. 187.

93. 2798.

94. 635.

95. 870.

96. 2814.

97. 972.

98. 1105; 1140; 1205; 1252; 1275; 1529.

99. 1424; 2672.

100. 1238; 1326; 2148; 2581; 2677; 2796.

101. 1166.

102. 2512.

103. 1231.

104. 2297.

105. 1288.

106. 2356.

107. 2476.

108. 2495.

109. 2206.

110. 74; 1082; 1398; 1758; 1769; 2216, inter alia.

111. 943; 1320; 1426; 1567; 1832; 2800.

112. 909.

113. 241; 385; 1288; 2034; 2306; 2568; 2703.

114. 129; 241; 484; 2082; 2381; 2407; 2503; 2688; 2835.

115. 287; 494.

116. 104.

117. 318.

118. 1213; 2516.

119. 2407; 2508.

120. 478.

121. 296; 2487.

122. 2381.

123. 175; 2597.

124. 2610.

125. 1675.

126. 64.

127. 80; 186; 679; 1029.

128. 373; cf., also 596; 679.

129. 2827.

130. 514; 659; 844; 1025; 1213.

131. 318; 326.

132. 1213.

133. 2826.

134. 895; 2758.

135. 318.

136. 332.

137. 102.

138. 208.

139. 859; 2758.

140. 55, 1403.

141. 521; 1419.

142. 820.

143. 2154; 2776.

144. 655.

145. 691; 2459.

146. 596; 1598.

147. 697; 2411.

148. 2583.

149. 589; cf., also 1189; 1198; 1234; 1253; 1293; 1308; 1342; 2219; 2272; 2347.

150. 656.

151. 1918; cf., also 388.

152. 475; 679; 2842.

153. 578.

154. 194.

155. 464.

156. 2842.

157. 493.

158. 539.

159. 1412.

160. 1162.

161. 2777.

162. 648.

163. 502; 623; 2232; 2526; 2672.

164. 1390.

165. 662; 1020; 2148; 2319.

166. 549.

167. 373.

168. 266.

169. 1403; 1461.

170. 68; also 686; 1403; 146.

171. 205; also 218; 97.

172. 129; 205; 318; 359; 427; 501; 974.

173. 1149; cf., also 1158.

174. 255; 475; 653; 697.

175. 2831; 2173.

176. 2206.

177. 2421; 2811.

178. 2113; 2206; 2321; 2395.

179. 339; 427; 2375; cf., also 807.

180. 125.

181. 1119; 2321.

182. 353.

183. 359.

184. 532; also 724; 997; 2823.

185. 1825; 1852.

186. 323.

187. Epstein, 2020; Gowder, 2023; Laufer and Nissenbaum, 2023, p. 33.

188. 133.

189. 1598.

190. 147; 277; 301; 308; 464; 997; 1001.

191. 613.

192. 178; 201; 277; 308.

193. 266; 895; 2758.

194. 464.

195. 2150.

196. 951; 1061.

197. 493.

198. 359.

199. 244.

 

References

R. Adams, 2019. “Social media urged to take ‘moment to reflect’ after girl’s death,” Guardian (29 January), at https://www.theguardian.com/media/2019/jan/30/social-media-urged-to-take-moment-to-reflect-after-girls-death, accessed 15 May 2024.

J.S. Andersen and W. Maalej, 2023. “Design patterns for machine learning based systems with human-in-the-loop,” arXiv:2312.00582 (1 December).
doi: https://doi.org/10.48550/arXiv.2312.00582, accessed 15 May 2024.

A. Andreou, G. Venkatadri, O. Goga, K.P. Gummadi, P. Loiseau, and A. Mislove, 2018. “Investigating ad transparency mechanisms in social media: A case study of Facebook’s explanations,” Proceedings of the Network and Distributed System Security Symposium (NDSS 2018), pp. 1–15, and at https://hal.science/hal-01955309, accessed 15 May 2024.

T. Apramian, S. Cristancho, C. Watling, and L. Lingard, 2017. “(Re)grounding grounded theory: A close reading of theory in four schools,” Qualitative Research, volume 17, number 4, pp. 359–376.
doi: https://doi.org/10.1177/1468794116672914, accessed 15 May 2024.

C. Armitage, N. Botton, L. Dejeu-Castang, and L. Lemoine, 2023. Study on the impact of recent developments in digital advertising on privacy, publishers and advertisers. Bruxelles: European Commission, Directorate-General for Communications Networks, Content and Technology, at https://op.europa.eu/en/publication-detail/-/publication/8b950a43-a141-11ed-b508-01aa75ed71a1/, accessed 15 May 2024.

B. Aysolmaza, R. Müller, and D. Meacham, 2023. “The public perceptions of algorithmic decision-making systems: Results from a large-scale survey,” Telematics and Informatics, volume 79, 101954.
doi: https://doi.org/10.1016/j.tele.2023.101954, accessed 15 May 2024.

C. Ballard, 2023. “Perhaps YouTube fixed its algorithm. It did not fix its extremism problem,” Tech Policy Press (9 November), at https://techpolicy.press/perhaps-youtube-fixed-its-algorithm-it-did-not-fix-its-extremism-problem/, accessed 15 May 2024.

S. Banaji and R. Bhat, 2022. Social media and hate. New York: Routledge.
doi: https://doi.org/10.4324/9781003083078, accessed 15 May 2024.

J. Barata and J. Calvet-Bademunt, 2023. “The European Commission’s approach to DSA systemic risk is concerning for freedom of expression,” Tech Policy Press (30 October), at https://www.techpolicy.press/the-european-commissions-approach-to-dsa-systemic-risk-is-concerning-for-freedom-of-expression/, accessed 15 May 2024.

S.B. Barnes, 2006. “A privacy paradox: Social networking in the United States,” First Monday, volume 11, number 9.
doi: https://doi.org/10.5210/fm.v11i9.1394, accessed 15 May 2024.

P. Bekos, P. Papadopoulos, E.P. Markatos, and N. Kourtellis, 2023. “The hitchhiker’s guide to Facebook Web tracking with invisible pixels and click IDs,” WWW ’23: Proceedings of the ACM Web Conference 2023, pp. 2,132–2,143.
doi: https://doi.org/10.1145/3543507.3583311, accessed 15 May 2024.

A. Bendiek and I. Stuerzer, 2023. “The Brussels Effect, European regulatory power and political capital: Evidence for mutually reinforcing internal and external dimensions of the Brussels Effect from the European digital policy debate,” Digital Society, volume 2, article number 5.
doi: https://doi.org/10.1007/s44206-022-00031-1, accessed 15 May 2024.

M.A. Bermejo-Agueda, P. Callejo, R. Cuevas, and A. Cuevas, 2023. “adF: A novel system for measuring Web fingerprinting through ads,” arXiv:2311.08769 (15 November).
doi: https://doi.org/10.48550/arXiv.2311.08769, accessed 15 May 2024.

A. Bhatia and A. Allen, 2023. “Auditing in the dark: Guidance is needed to ensure maximum impact of DSA algorithmic audits,” Center for Democracy & Technology (20 November), at https://cdt.org/insights/auditing-in-the-dark-guidance-is-needed-to-ensure-maximum-impact-of-dsa-algorithmic-audits/, accessed 15 May 2024.

E. Bietti, 2020. “From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy,” FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 210–219.
doi: https://doi.org/10.1145/3351095.3372860, accessed 15 May 2024.

B. Botero Arcila and R. Griffin, 2023. Social media platforms and challenges for democracy, rule of law and fundamental rights. Bruxelles: European Parliament.
doi: https://doi.org/10.2861/672578, accessed 15 May 2024.

C. Busch, 2023. “From algorithmic transparency to algorithmic choice: European perspectives on recommender Systems and platform regulation,” In: S. Genovesi, K. Kaesling, and S. Robbins (editors). Recommender systems: Legal and ethical issues. Cham, Switzerland: Springer, pp. 31–54.
doi: https://doi.org/10.1007/978-3-031-34804-4_3, accessed 15 May 2024.

M. Buyl and T. De Bie, 2024. “Inherent limitations of AI fairness,” Communications of the ACM, volume 67, number 2, pp. 48–55.
doi: https://doi.org/10.1145/3624700, accessed 15 May 2024.

S.A. Castaño-Pulgarín, N. Suárez-Betancur, L.M.T. Vega, and H.M.H. López, 2021. “Internet, social media and online hate speech. Systematic review,” Aggression and Violent Behavior, volume 58, 101608.
doi: https://doi.org/10.1016/j.avb.2021.101608, accessed 15 May 2024.

C. Cauffman and C. Goanta, 2021. “A new order: The Digital Services Act and consumer protection,” European Journal of Risk Regulation, volume 12, number 4, pp. 758–774.
doi: https://doi.org/10.1017/err.2021.8, accessed 15 May 2024.

B. Chomanski, 2021. “The missing ingredient in the case for regulating big tech,” Minds and Machines, volume 31, number 2, pp. 257–275.
doi: https://doi.org/10.1007/s11023-021-09562-x, accessed 15 May 2024.

P. Cremonesi, F. Garzotto, and R. Turrin, 2012. “Investigating the persuasion potential of recommender systems from a quality perspective: An empirical study,” ACM Transactions on Interactive Intelligent Systems, volume 2, number 2, article number 11, pp. 1–41.
doi: https://doi.org/10.1145/2209310.2209314, accessed 15 May 2024.

M. Crowe, M. Inder, and R. Porter, 2015. “Conducting qualitative research in mental health: Thematic and content analyses,” Australian & New Zealand Journal of Psychiatry, volume 49, number 7, pp. 616–623.
doi: https://doi.org/10.1177/0004867415582053, accessed 15 May 2024.

P. de Hert and G. Lazcoz, 2021. “Radical rewriting of Article 22 GDPR on machine decisions in the AI era,” European Law Blog (13 October), at https://europeanlawblog.eu/2021/10/13/radical-rewriting-of-article-22-gdpr-on-machine-decisions-in-the-ai-era/, accessed 15 May 2024.

J.S. Edu, J.M. Such, and G. Suarez-Tangil, 2020. “Smart home personal assistants: A security and privacy review,” ACM Computing Surveys, volume 53, number 6, article number 116, pp. 1–36.
doi: https://doi.org/10.1145/3412383, accessed 15 May 2024.

B. Epstein, 2020. “Why it is so difficult to regulate disinformation online,” In: W.L. Bennett and S. Livingston (editors). The disinformation age: Politics, technology, and disruptive communication in the United States. Cambridge: Cambridge University Press, pp. 190–210.
doi: https://doi.org/10.1017/9781108914628.008, accessed 15 May 2024.

M. Eslami, K. Karahalios, C. Sandvig, K. Vaccaro, A. Rickman, K. Hamilton, and A. Kirlik, 2016. “First I ‘like’ it, then I hide it: Folk theories on social feeds,” CHI ’16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 2371–2382.
doi: https://doi.org/10.1145/2858036.2858494, accessed 15 May 2024.

European Commission, Directorate-General for Communications Networks, Content and Technology, 2023. “Digital Services Act: Application of the risk management framework to Russian disinformation campaigns,” at https://data.europa.eu/doi/10.2759/764631, accessed 15 May 2024.

European Commission, Directorate-General for Migration and Home Affairs, 2023. “EU Internet Forum — Study on the role and effects of the use of algorithmic amplification to spread terrorist, violent extremist and borderline content: Final report,” at https://data.europa.eu/doi/10.2837/259157, accessed 15 May 2024.

European Commission, 2020. “Digital Services Act — Deepening the internal market and clarifying responsibilities for digital services,” at https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12417-Digital-Services-Act-deepening-the-Internal-Market-and-clarifying-responsibilities-for-digital-services/public-consultation_en, accessed 15 May 2024.

European Fact-Checking Standard Network (EFCSN), 2024. “Fact-checking and related risk-mitigation measures for disinformation in the very large online platforms and search engines: A systematic review of the implementation of big tech commitments to the EU Code of Practice on Disinformation,” at https://efcsn.com/wp-content/uploads/2024/03/EFCSN-–-Fact-checking-and-related-Risk-Mitigation-Measures-for-Disinformation-in-the-Very-Large-Online-Platforms.pdf, accessed 15 May 2024.

U. Flick, 2004. “Triangulation in qualitative research,” In: U. Flick, E. von Kardorff, and I. Steinke (editors). A companion to qualitative research. London: Sage, pp. 178–183.

F. Gedikli, D. Jannach, and M. Ge, 2014. “How should I explain? A comparison of different explanation types for recommender systems,” International Journal of Human-Computer Studies, volume 72, number 4, pp. 367–382.
doi: https://doi.org/10.1016/j.ijhcs.2013.12.007, accessed 15 May 2024.

D. Geissler, A. Maarouf, and S. Feuerriegel, 2023. “Causal understanding of why users share hate speech on social media,” arXiv:2310.15772 (24 October).
doi: https://doi.org/10.48550/arXiv.2310.15772, accessed 15 May 2024.

P. Gowder, 2023. The networked Leviathan: For democratic platforms. Cambridge: Cambridge University Press.
doi: https://doi.org/10.1017/9781108975438, accessed 15 May 2024.

A.M. Guess, N. Malhotra, J. Pan, P. Barberá, H. Allcott, T. Brown, A. Crespo-Tenorio, D. Dimmery, D. Freelon, M. Gentzkow, S. González-Bailón, E. Kennedy, Y.M. Kim, D. Lazer, D. Moehler, B. Nyhan, C.V. Rivera, J. Settle, D.R. Thomas, E. Thorson, R. Tromble, A. Wilkins, M. Wojcieszak, B. Xiong, C.K. de Jonge, A. Franco, W. Mason, N.J. Stroud, J.A. Tucker, 2023. “How do social media feed algorithms affect attitudes and behavior in an election campaign?” Science, volume 381, number 6656 (28 July), pp. 398–404.
doi: https://doi.org/10.1126/science.abp9364, accessed 15 May 2024.

G. Hassan, S. Brouillette-Alarie, S. Alava, D. Frau-Meigs, L. Lavoie, A. Fetiu, W. Varela, E. Borokhovski, V. Venkatesh, C. Rousseau, and S. Sieckelinck, 2018. “Exposure to extremist online content could lead to violent radicalization: A systematic review of empirical evidence,” International Journal of Developmental Science, volume 12, numbers 1–2, pp. 71–88.
doi: https://doi.org/10.3233/DEV-170233, accessed 15 May 2024.

H. Heath and S. Cowley, 2004. “Developing a grounded theory approach: A comparison of Glaser and Strauss,” Nursing Studies, volume 41, number 2, pp. 141–150.
doi: https://doi.org/10.1016/s0020-7489(03)00113-5, accessed 15 May 2024.

N. Helberger, M. van Drunen, S. Vrijenhoek, and J. Möller, 2021. “Regulation of news recommenders in the Digital Services Act: Empowering David against the Very Large Online Goliath,” Internet Policy Review (26 February), at https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large, accessed 15 May 2024.

A.P. Heldt, 2022. “EU Digital Services Act: The white hope of intermediary regulation,” In: T. Flew and F.R. Martin (editors). Digital platform regulation: Global perspectives on Internet governance. Cham, Switzerland: Palgrave Macmillan, pp. 69–84.
doi: https://doi.org/10.1007/978-3-030-95220-4_4, accessed 15 May 2024.

M. Hildebrandt, 2020. Law for computer scientists and other folk. Oxford: Oxford University Press.
doi: https://doi.org/10.1093/oso/9780198860877.001.0001, accessed 15 May 2024.

M. Hildebrandt, 2009. “Who is profiling who? Invisible visibility,” In: S. Gutwirth, Y. Poullet, P.D. Hert, C. de Terwangne, and S. Nouwt (editors). Reinventing data protection? Dordrecht: Springer, pp. 239–252.
doi: https://doi.org/10.1007/978-1-4020-9498-9_14, accessed 15 May 2024.

K. Hjerppe, J. Ruohonen, and V. Leppänen, 2023. “Extracting LPL privacy policy purposes from annotated Web service source code,” Software and Systems Modeling, volume 22, number 1, pp. 331–349.
doi: https://doi.org/10.1007/s10270-022-00998-y, accessed 15 May 2024.

H.-F. Hsieh and S.E. Shannon, 2005. “Three approaches to qualitative content analysis,” Qualitative Health Research, volume 15, number 9, pp. 1,277–1,288.
doi: https://doi.org/10.1177/1049732305276687, accessed 15 May 2024.

T. Humberstone, 2023. “I’m a luddite (and so can you!),” The Nib (17 July), at https://thenib.com/im-a-luddite/, accessed 15 May 2024.

L. Iandoli, S. Primario, and G. Zollo, 2021. “The impact of group polarization on the quality of online debate in social media: A systematic literature review,” Technological Forecasting and Social Change, volume 170, 120924.
doi: https://doi.org/10.1016/j.techfore.2021.120924, accessed 15 May 2024.

Irish Council for Civil Liberties (ICCL), 2023. “The European Commission must follow Ireland's lead, and switch off big tech's toxic algorithms,” at https://www.iccl.ie/2023/the-european-commission-must-follow-irelands-lead-and-switch-off-big-techs-toxic-algorithms/, accessed 15 May 2024.

U. Iqbal, P.N. Bahrami, R. Trimananda, H. Cui, A. Gamero-Garrido, D.J. Dubois, D. Choffnes, A. Markopoulou, F. Roesner, and Z. Shafiq, 2023. “Tracking, profiling, and ad targeting in the Alexa Echo smart speaker ecosystem,” IMC ’23: Proceedings of the 2023 ACM on Internet Measurement Conference, pp. 569–583.
doi: https://doi.org/10.1145/3618257.3624803, accessed 15 May 2024.

R. Jackson and J. Malaret, 2023. “As Israel and Hamas go to war, the Digital Services Act faces its first major test,” Digital Forensic Research Lab (DFRLab), Atlantic Council (26 October), at https://dfrlab.org/2023/10/26/as-israel-and-hamas-go-to-war-the-digital-services-act-faces-its-first-major-test/, accessed 15 May 2024.

V. Kalmus, G. Bolin, and R. Figueiras, 2022. “Who is afraid of dataveillance? Attitudes toward online surveillance in a cross-cultural and generational perspective,” New Media & Society (22 November).
doi: https://doi.org/10.1177/14614448221134493, accessed 15 May 2024.

S. Kapoor and A. Narayanan, 2023. “How to prepare for the deluge of generative AI on social media: A grounded analysis of the challenges and opportunities,” Knight First Amendment Institute, Columbia University (16 June), at https://knightcolumbia.org/content/how-to-prepare-for-the-deluge-of-generative-ai-on-social-media, accessed 15 May 2024.

K. Kieslich, B. Keller, and C. Starke, 2022. “Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence,” Big Data & Society (10 May).
doi: https://doi.org/10.1177/20539517221092956, accessed 15 May 2024.

K. Kieslich, M. Lünich, and F. Marcinkowski, 2021. “The Threats of Artificial Intelligence Scale (TAI). Development, measurement and test over three application domains,” International Journal of Social Robotics, volume 13, number 7, pp. 1,563–1,577.
doi: https://doi.org/10.1007/s12369-020-00734-w, accessed 15 May 2024.

B.P. Knijnenburg, M.C. Willemsen, and A. Kobsa, 2011. “A pragmatic procedure to support the user-centric evaluation of recommender systems,” RecSys ’11: Proceedings of the Fifth ACM Conference on Recommender Systems, pp. 321–324.
doi: https://doi.org/10.1145/2043932.2043993, accessed 15 May 2024.

E. Kubin and C. von Sikorski, 2021. “The role of (social) media in political polarization: A systematic review,” Annals of the International Communication Association, volume 45, number 3, pp. 188–206.
doi: https://doi.org/10.1080/23808985.2021.1976070, accessed 15 May 2024.

M. Kupiec, 2023. “Amazon’s interim relief to suspend obligations on online advertising transparency under the DSA: One swallow doesn’t make a summer,” Kluwer Competition Law Blog (6 November), at https://competitionlawblog.kluwercompetitionlaw.com/2023/11/06/amazons-interim-relief-to-suspend-obligations-on-online-advertising-transparency-under-the-dsa-one-swallow-doesnt-make-a-summer/, accessed 15 May 2024.

V. Lai, C. Chen, A. Smith-Renner, Q.V. Liao, and C. Tan, 2023. “Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies,” FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp. 1,369–1,385.
doi: https://doi.org/10.1145/3593013.3594087, accessed 15 May 2024.

P. Laperdrix, N. Bielova, B. Baudry, and G. Avoine, 2020. “Browser fingerprinting: A survey,” ACM Transactions on the Web, volume 14, number 2, article number 8, pp. 1–33.
doi: https://doi.org/10.1145/3386040, accessed 15 May 2024.

B. Laufer and H. Nissenbaum, 2023. “Algorithmic displacement of social trust,” Knight First Amendment Institute, Columbia University (29 November), at https://knightcolumbia.org/content/algorithmic-displacement-of-social-trust, accessed 15 May 2024.

M.K. Lee, 2018. “Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management,” Big Data & Society (8 March).
doi: https://doi.org/10.1177/2053951718756684, accessed 15 May 2024.

Z. Liu, U. Iqbal, and N. Saxena, 2022. “Opted out, yet tracked: Are regulations enough to protect your privacy?” arXiv:2202.00885 (2 February).
doi: https://doi.org/10.48550/arXiv.2202.00885, accessed 15 May 2024.

M. Martijn, C. Conati, and K. Verbert, 2022. “‘Knowing me, knowing you’: Personalized explanations for a music recommender system,” User Modeling and User-Adapted Interaction, volume 32, numbers 1–2, pp. 215–252.
doi: https://doi.org/10.1007/s11257-021-09304-9, accessed 15 May 2024.

D. McSherry, 2005. “Explanation in recommender systems,” Artificial Intelligence Review, volume 24, number 2, pp. 179–197.
doi: https://doi.org/10.1007/s10462-005-4612-x, accessed 15 May 2024.

S. Milano, M. Taddeo, and L. Floridi, 2020. “Recommender systems and their ethical challenges,” AI & Society, volume 35, pp. 957–967.
doi: https://doi.org/10.1007/s00146-020-00950-y, accessed 15 May 2024.

D.L. Morgan, 1993. “Qualitative content analysis: A guide to paths not taken,” Qualitative Health Research, volume 3, number 1, pp. 112–121.
doi: https://doi.org/10.1177/104973239300300107, accessed 15 May 2024.

N.A. Pagano, 2018. “The indecency of the Communications Decency Act § 230: Unjust immunity for monstrous social media platforms,” Pace Law Review, volume 39, number 1, pp. 511–538.
doi: https://doi.org/10.58948/2331-3528.1994, accessed 15 May 2024.

C. Papaevangelou, 2023. “‘The non-interference principle’: Debating online platforms’ treatment of editorial content in the European Union's Digital Services Act,” European Journal of Communication, volume 38, number 5, pp. 466–483.
doi: https://doi.org/10.1177/02673231231189036, accessed 15 May 2024.

N. Patel, 2023. “Harvard professor Lawrence Lessig on why AI and social media are causing a free speech crisis for the Internet,” The Verge (24 October), at https://www.theverge.com/23929233/lawrence-lessig-free-speech-first-amendment-ai-content-moderation-decoder-interview, accessed 15 May 2024.

P. Pu, L. Chen, and R. Hu, 2012. “Evaluating recommender systems from the user’s perspective: Survey of the state of the art,” User Modeling and User-Adapted Interaction, volume 22, numbers 4–5, pp. 317–355.
doi: https://doi.org/10.1007/s11257-011-9115-7, accessed 15 May 2024.

M.H. Ribeiro, R. Ottoni, R. West, V.A.F. Almeida, and W. Meira, 2020. “Auditing radicalization pathways on YouTube,” FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 131–141.
doi: https://doi.org/10.1145/3351095.3372879, accessed 15 May 2024.

C. Riedenstein and B. Echikson, 2023. “Middle East violence tests Europe’s new digital content law,” Center for European Policy Analysis (13 October), at https://cepa.org/article/mideast-violence-tests, accessed 15 May 2024.

J. Ruohonen, 2023a. “Note on the proposed law for improving the transparency of political advertising in the European Union,” arXiv:2303.02863 (6 March).
doi: https://doi.org/10.48550/arXiv.2303.02863, accessed 15 May 2024.

J. Ruohonen, 2023b. “A text mining analysis of data protection politics: The case of plenary sessions of the European Parliament,” arXiv:2302.09939 (20 February).
doi: https://doi.org/10.48550/arXiv.2302.09939, accessed 15 May 2024.

J. Ruohonen and K. Hjerppe, 2022. “The GDPR enforcement fines at glance,” Information Systems, volume 106, 101876.
doi: https://doi.org/10.1016/j.is.2021.101876, accessed 15 May 2024.

J. Ruohonen and V. Leppänen, 2018. “Invisible pixels are dead, long live invisible pixels!” WPES’18: Proceedings of the 2018 Workshop on Privacy in the Electronic Society, pp. 28–32.
doi: https://doi.org/10.1145/3267323.3268950, accessed 15 May 2024.

J. Ruohonen and V. Leppänen, 2017. “Whose hands are in the Finnish cookie jar?” Proceedings of the European Intelligence and Security Informatics Conference (EISIC 2017), pp. 127–130.
doi: https://doi.org/10.1109/EISIC.2017.25, accessed 15 May 2024.

J. Ruohonen, J. Salovaara, and V. Leppänen, 2018. “Crossing cross-domain paths in the current Web,” Proceedings of the 16th Annual Conference on Privacy, Security and Trust (PST 2018), pp. 1–5.
doi: https://doi.org/10.1109/PST.2018.8514163, accessed 15 May 2024.

H.S. Sætra, M. Coeckelbergh, and J. Danaher, 2022. “The AI ethicist’s dilemma: Fighting big tech by supporting big tech,” AI Ethics, volume 2, number 3, pp. 15–27.
doi: https://doi.org/10.1007/s43681-021-00123-7, accessed 15 May 2024.

G. Schlag, 2023. “European Union's regulating of social media: A discourse analysis of the Digital Services Act,” Politics and Governance, volume 11, number 3.
doi: https://doi.org/10.17645/pag.v11i3.6735, accessed 15 May 2024.

R. Shang, K.J.K. Feng, and C. Shah, 2022. “Why am I not seeing it? Understanding users’ needs for counterfactual explanations in everyday recommendations,” FAccT ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 1,330–1,340.
doi: https://doi.org/10.1145/3531146.3533189, accessed 15 May 2024.

D. Shin, A. Rasul, and A. Fotiadis, 2022. “Why am I seeing this? Deconstructing algorithm literacy through the lens of users,” Internet Research, volume 32, number 4, pp. 1,214–1,234.
doi: https://doi.org/10.1108/INTR-02-2021-0087, accessed 15 May 2024.

D. Shin, B. Zaid, F. Biocca, and A. Rasul, 2022. “In platforms we trust? Unlocking the black-box of news algorithms through interpretable AI,” Journal of Broadcasting & Electronic Media, volume 66, number 2, pp. 235–256.
doi: https://doi.org/10.1080/08838151.2022.2057984, accessed 15 May 2024.

J.J. Smith, L. Jayne, and R. Burke, 2022. “Recommender systems and algorithmic hate,” RecSys ’22: Proceedings of the 16th ACM Conference on Recommender Systems, pp. 592–597.
doi: https://doi.org/10.1145/3523227.3551480, accessed 15 May 2024.

S. Srivastava, 2023. “Algorithmic governance and the international politics of big tech,” Perspectives on Politics, volume 21, number 3, pp. 989–1,000.
doi: https://doi.org/10.1017/S1537592721003145, accessed 15 May 2024.

C. Starke, J. Baleis, B. Keller, and F. Marcinkowski, 2022. “Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature,” Big Data & Society (10 October).
doi: https://doi.org/10.1177/20539517221115189, accessed 15 May 2024.

A. Strowel and J. De Meyere, 2023. “The Digital Services Act: Transparency as an efficient tool to curb the spread of disinformation on online platforms?” Journal of Intellectual Property, Information Technology and Electronic Commerce Law, volume 14, number 1, pp. 66–83, and at https://www.jipitec.eu/archive/issues/jipitec-14-1-2023/5708/, accessed 15 May 2024.

M. Tabassum, T. Kosiński, A. Frik, N. Malkin, P. Wijesekera, S. Egelman, and H.R. Lipford, 2020. “Investigating users’ preferences and expectations for always-listening voice assistants,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, volume 3, number 4, article number 153, pp. 1–23.
doi: https://doi.org/10.1145/3369807, accessed 15 May 2024.

Tech Transparency Project (TTP), 2023. “Dangerous by design: YouTube leads young gamers to videos of guns, school shootings” (16 May), at https://www.techtransparencyproject.org/articles/youtube-leads-young-gamers-to-videos-of-guns-school, accessed 15 May 2024.

E.C. Teppan and M. Zanker, 2015. “Decision biases in recommender systems,” Journal of Internet Commerce, volume 14, number 2, pp. 255–275.
doi: https://doi.org/10.1080/15332861.2015.1018703, accessed 15 May 2024.

L. Terren and R. Borge, 2021. “Echo chambers on social media: A systematic review of the literature,” Review of Communication Research, volume 9, pp. 99–118.

N. Tintarev and J. Masthoff, 2007. “A survey of explanations in recommender systems,” 2007 IEEE 23rd International Conference on Data Engineering Workshop, pp. 801–810.
doi: https://doi.org/10.1109/ICDEW.2007.4401070, accessed 15 May 2024.

A. Tontodimamma, E. Nissi, A. Sarra, and L. Fontanella, 2021. “Thirty years of research into hate speech: Topics of interest and their evolution,” Scientometrics, volume 126, pp. 157–179.
doi: https://doi.org/10.1007/s11192-020-03737-6, accessed 15 May 2024.

E. Treyger, J. Taylor, D. Kim, and M.A. Holliday, 2023. “Assessing and suing an algorithm: Perceptions of algorithmic decision-making,” RAND Corporation, RR-A2100-1.
doi: https://doi.org/10.7249/RRA2100-1, accessed 15 May 2024.

C. Tsai and P. Brusilovsky, 2021. “The effects of controllability and explainability in a social recommender system,” User Modeling and User-Adapted Interaction, volume 31, number 4, pp. 591–627.
doi: https://doi.org/10.1007/s11257-020-09281-5, accessed 15 May 2024.

A. Turillazzi, M. Taddeo, L. Floridi, and F. Casolari, 2023. “The digital services act: An analysis of its ethical, legal, and social implications,” Law, Innovation and Technology, volume 15, number 1, pp. 83–106.
doi: https://doi.org/10.1080/17579961.2023.2184136, accessed 15 May 2024.

UNESCO, 2023. “Guidelines for the governance of digital platforms: Safeguarding freedom of expression and access to information through a multi-stakeholder approach,” at https://unesdoc.unesco.org/ark:/48223/pf0000387339, accessed 15 May 2024.

P. van Cleynenbreugel, 2023. “Digital services coordinators and other competent authorities in the Digital Services Act: Streamlined enforcement coordination lost?” European Law Blog (30 November), at https://europeanlawblog.eu/2023/11/30/digital-services-coordinators-and-other-competent-authorities-in-the-digital-services-act-streamlined-enforcement, accessed 15 May 2024.

E. van den Broeck, K. Poels, and M. Walrave, 2020. “How do users evaluate personalized Facebook advertising? An analysis of consumer- and advertiser controlled factors,” Qualitative Market Research, volume 23, number 2, pp. 309–327.
doi: https://doi.org/10.1108/QMR-10-2018-0125, accessed 15 May 2024.

J. van Hoboken, 2022. “European lessons in self-experimentation: From the GDPR to European platform regulation,” Centre for International Governance Innovation (20 June), at https://www.cigionline.org/articles/european-lessons-in-self-experimentation-from-the-gdpr-to-european-platform-regulation/, accessed 15 May 2024.

M. Veale, R. Binns, and L. Edwards, 2018. “Algorithms that remember: Model inversion attacks and data protection law,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, volume 376, number 2133 (28 November), pp. 1–15.
doi: https://doi.org/10.1098/rsta.2018.0083, accessed 15 May 2024.

A.E. Waldman, 2021. Industry unbound: The inside story of privacy, data, and corporate power. Cambridge: Cambridge University Press.
doi: https://doi.org/10.1017/9781108591386, accessed 15 May 2024.

J. Whittaker, S. Looney, A. Reed, and F. Votta, 2021. “Recommender systems and the amplification of extremist content,” Internet Policy Review, volume 10, number 2 (30 June).
doi: https://doi.org/10.14763/2021.2.1565, accessed 15 May 2024.

M. Yesilada and S. Lewandowsky, 2022. “Systematic review: YouTube recommendations and problematic content,” Internet Policy Review, volume 11, number 1 (31 March).
doi: https://doi.org/10.14763/2022.1.1652, accessed 15 May 2024.

S. Zuboff, 2020. “Caveat usor: Surveillance capitalism as epistemic inequality,” In: K. Werbach (editor). After the digital tornado: Networks, algorithms, humanity. Cambridge: Cambridge University Press, pp. 174–214.
doi: https://doi.org/10.1017/9781108610018, accessed 15 May 2024.

 


Editorial history

Received 8 February 2024; revised 18 April 2024; revised 14 May 2024; accepted 15 May 2024.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Mysterious and manipulative black boxes: A qualitative analysis of perceptions on recommender systems
by Jukka Ruohonen.
First Monday, Volume 29, Number 6 - 3 June 2024
https://firstmonday.org/ojs/index.php/fm/article/download/13357/11627
doi: https://dx.doi.org/10.5210/fm.v29i6.13357