Opaque algorithms, transparent biases: Automated content moderation during the Sheikh Jarrah Crisis


  • Norah Abokhodair
  • Yarden Skop
  • Sarah Rüller
  • Konstantin Aal
  • Houda Elmimouni
Social media platforms, while influential tools for human rights activism, free speech, and mobilization, also bear the influence of corporate ownership and commercial interests. This dual character can produce clashing interests in the operation of these platforms. This study centers on the May 2021 Sheikh Jarrah events in East Jerusalem, a focal point in the Israeli-Palestinian conflict that garnered global attention. During this period, Palestinian activists and their allies observed and encountered a notable increase in automated content moderation actions, such as shadow banning and content removal. We surveyed 201 users who experienced content moderation and conducted 12 interviews with political influencers to assess the impact of these practices on activism. Our analysis centers on automated content moderation and transparency, investigating how users and activists perceive the content moderation systems employed by social media platforms, and the opacity of those systems. Findings reveal that pro-Palestinian activists perceived censorship enacted through opaque and obfuscated technological mechanisms of content demotion, which complicate the substantiation of harm and leave users without redress mechanisms. We frame this difficulty as an algorithmic harm in the realm of automated content moderation. This dynamic has far-reaching implications for the future of activism and raises questions about the centralization of power in digital spaces.
How to Cite

Abokhodair, N., Skop, Y., Rüller, S., Aal, K., & Elmimouni, H. (2024). Opaque algorithms, transparent biases: Automated content moderation during the Sheikh Jarrah Crisis. First Monday, 29(4). https://doi.org/10.5210/fm.v29i4.13620