First Monday

The coming age of adversarial social bot detection by Stefano Cresci, Marinella Petrocchi, Angelo Spognardi, and Stefano Tognazzi



Abstract
Social bots are automated accounts often involved in unethical or illegal activities. Academia has shown how these accounts evolve over time, becoming increasingly smart at hiding their true nature by disguising themselves as genuine accounts. When bots evade detection, bot hunters adapt their solutions to find them: a cat-and-mouse game. Inspired by adversarial machine learning and computer security, we propose an adversarial and proactive approach to social bot detection, and we call scholars to arms to shed light on this open and intriguing field of study.

Contents

Introduction
Machine learning in hostile environments
Bots, bots everywhere
The cat-and-mouse game
Changing the rules of the game
Adversarial social bot detection
Ethics of adversarial bot detection
A call to arms

 


 

Introduction

With the recent — relentless — rise of artificial intelligence (AI) and machine learning (ML), our lives are increasingly influenced by decisions taken on our behalf by automated systems. Indeed, while we indulge in our everyday activities, we face dozens of decisions made by all sorts of learning algorithms. When we turn to Apple Siri, Amazon Alexa, and Google Now, e.g., to speed up a Web search or to organize our daily schedules, we interact with powerful speech recognition algorithms. When we browse the latest products on sale on our favorite online shop, our activities and choices become the inputs of profiling algorithms, and the proffered products are the result of a plethora of automatic recommendations that aim at maximizing our purchases. As we communicate with our peers, we are presented with a set of people we may know, based on the result of some algorithm that analyzed the social networks of our friends. When we read about popular news stories on Facebook and Twitter, our news feed is personalized so as to show items that match our interests. Public safety and information security are no exception. Image recognition algorithms are widely used for intelligent transportation and for video surveillance throughout our cities, while e-mail spam, malware spreading, online frauds, and system intrusions are kept at bay by network traffic classification and anomaly detection algorithms.

These examples — just a few out of an immense spectrum of applications — highlight the usefulness of artificial intelligence for solving real-world tasks, a usefulness that is well motivated by the exceptional — even human-like — performance of recent ML algorithms. Notably, the vast majority of such algorithms are designed to operate in stationary and benign (or, at the very least, neutral) environments. In practice, however, there are many situations in which the working environment is neither stationary nor benign. Think for example of all the security applications of ML, where systems and data are protected from attackers that try to tamper with them. When the previous assumptions on the environment are violated, even the best-of-breed algorithms start making big mistakes. For this reason, a recent branch of ML research started focusing on the development of robust algorithms, capable of resisting attackers (Goodfellow, et al., 2018).

 

++++++++++

Machine learning in hostile environments

Since the early days of ML, it has been known that instances analyzed at ‘test’ time (i.e., when the ML model is deployed) might have somewhat different statistical properties than those used for training the model. This issue naturally occurs in many applications and results in degraded performance. For example, in a computer vision system, two different cameras could be used to take pictures at training and test time, thus leading to a trained model that does not perform well on test images (Goodfellow, et al., 2018). More worryingly, however, an attacker could deliberately manipulate training or test instances in order to cause the algorithm to make mistakes, as shown in Figure 1. Such modified inputs are called ‘adversarial examples’, since they are generated by an adversary (Kurakin, et al., 2016).

 

Figure 1: Adversarial examples and their consequences, for a few notable ML tasks.
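
To make the notion of an adversarial example more concrete, the sketch below crafts one against a toy binary classifier using the fast gradient sign method, one of the attacks discussed by Kurakin, et al. (2016). The model (a simple logistic regression), the perturbation budget, and all numbers are illustrative assumptions of ours, not material taken from the cited works.

```python
# Minimal sketch: crafting an adversarial example with the fast gradient sign
# method (FGSM) against a toy logistic-regression classifier. The model,
# weights, and epsilon are illustrative assumptions, not from the cited works.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, b, eps=0.5):
    """Perturb x so that the classifier's loss on the true label y increases."""
    p = sigmoid(np.dot(w, x) + b)        # predicted probability of class 1
    grad_x = (p - y) * w                 # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)     # small step in the loss-increasing direction

# Toy usage: the perturbed instance receives a much lower score for its true class.
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
x, y = rng.normal(size=5), 1
x_adv = fgsm_example(x, y, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))   # the second value is lower
```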

 

Adversarial machine learning (AML), also known as ‘machine learning in hostile environments’, is a paradigm that deals with the development of algorithms designed to withstand the attacks of an adversary. It focuses on the study of possible attacks (e.g., how to modify an image in order to fool a computer vision system, as in Figure 1(a), or how to add some noise to a speech waveform to fool a speech recognition system, as in Figure 1(b)), and on the design of countermeasures to those attacks (e.g., smoothing the decision boundary of a classifier so as to make it more robust against adversarial examples, as shown in the right-hand side of Figure 1(c)). That is, AML studies possible attacks with the goal of building more robust and more secure systems. The idea of considering the presence of adversaries in order to improve the robustness of existing systems has been embraced by the ML community only recently. Interestingly, the very same idea is a well-consolidated practice in other fields that are — and have always been — intrinsically adversarial, e.g., computer and information security. Indeed, in security it is very common to find papers describing new possible attacks against security protocols, without immediately proposing adequate countermeasures (Lowe, 1996).
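
As a complement to the attack sketched above, the following is a minimal sketch of one common countermeasure, adversarial training: perturbed copies of the training instances are added to the training set so that the learned decision boundary becomes less sensitive to small input changes. The dataset, model, and perturbation budget are illustrative assumptions; this is not the specific defense advocated in the works cited above.

```python
# Minimal sketch of adversarial training as a countermeasure: retrain on
# clean plus adversarially perturbed instances. Dataset, model, and epsilon
# are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression().fit(X, y)

# FGSM-style perturbation of the training set against the fitted linear model.
eps = 0.3
w, b = clf.coef_[0], clf.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

# Retrain on the augmented set (labels unchanged) and compare robustness.
clf_robust = LogisticRegression().fit(np.vstack([X, X_adv]), np.concatenate([y, y]))
print("clean model, accuracy on perturbed data: ", clf.score(X_adv, y))
print("robust model, accuracy on perturbed data:", clf_robust.score(X_adv, y))
```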

Despite dating back to 2004–2005, AML gained momentum only recently and, in fact, Goodfellow, et al. (2018) consider the study of adversarial examples to be still in its infancy. Nonetheless, AML is currently regarded as one of the most promising directions for research on ML and other general AI approaches. The great interest around this novel paradigm is partly due to the pervasiveness of the weaknesses it aims to overcome. It has been demonstrated that almost every ML system is potentially vulnerable to adversarial attacks, since adversarial examples often transfer from one model to another, in such a way that attacks can be mounted by adversaries with little or no knowledge of the model’s characteristics (Kurakin, et al., 2016). This scenario casts a dark omen on our increasingly automated socio-technical world.

Building on the compelling results recently obtained in AML, in the remainder of this paper we investigate the theoretical advantages and the practical usefulness of applying this paradigm to a domain that is both societally and scientifically crucial: social bot detection. Like computer and information security, from which it inherits many features, social bot detection is intrinsically adversarial. Yet the adoption of the AML paradigm in this domain is almost unexplored.

 

++++++++++

Bots, bots everywhere

Undoubtedly, Social Media and Online Social Networks (OSNs) have a profound impact on our everyday life, giving voice to the crowds. However, their openness (e.g., support for programmatic access via APIs), ease of use, and support for anonymity inevitably set the stage for the proliferation of automated accounts called social bots. In fact, social bots (short for software robots) are as old as OSNs themselves (Ferrara, et al., 2016). They are computer programs capable of automatically performing a wide array of actions, such as producing, resharing, and liking content, or even establishing and maintaining social interactions with other users.

Social bots are not malicious and dangerous by definition; some of them are even benign (de Lima Salge and Berente, 2017). Unfortunately, however, the vast majority actually is. Such bots try to hide their automated nature by mimicking human behaviors, they are involved in shady and unethical activities, and they often act in coordination with other bots in order to magnify their impact on legitimate users. Social bots have recently been deemed responsible for some of the worst ailments of our online social ecosystems. Since 2014, they have reportedly tampered with online discussions about pretty much every major political election in Western countries (Cresci, 2020), including those about the 2016 U.K. Brexit referendum. Worryingly, recent studies provided evidence that social bots are involved in spreading misinformation and fake news (Shao, et al., 2018), and that they also tend to exacerbate online social conflicts, thus increasing polarization and, in many cases, leading to abusive and hateful speech in online debates (Stella, et al., 2018). Estimates conducted on Twitter report that, on average, social bots account for nine to 15 percent of total active platform users (Varol, et al., 2017), a considerable share indeed. What is worse, in those communities where there are strong political or economic incentives, the share of bots dramatically increases. Lately, a study found that as much as 71 percent of Twitter users involved in discussion spikes about stocks traded in U.S. financial markets are likely to be bots (Cresci, Lillo, et al., 2019). Moreover, in a large-scale crowdsourcing experiment, tech-savvy social media users proved unable to tell apart bots and legitimate users 76 percent of the time (Cresci, et al., 2017). But how did we end up in such a nightmarish situation?

 

++++++++++

The cat-and-mouse game

It has recently been demonstrated that a crucial problem with bots is that they evolve over time in order to evade established detection techniques (Cresci, et al., 2017). Any success at social bot detection, in turn, inevitably inspires countermeasures by bot developers. Hence, newer bots often feature advanced characteristics that make them much harder to detect than older ones. Because of this, social bot detection has always been a cat-and-mouse game in which a large, but unknown, number of human-like bots goes undetected. Looking back at over a decade of research and experimentation, we realize that this game has always followed a reactive schema (Cresci, Petrocchi, et al., 2019). As shown in the left-hand side of Figure 2(b), this schema starts with the observation of suspicious behaviors in OSNs, which leads to a study of social bot activities. Such a study is then exploited to design new detection techniques. As soon as the new detection techniques are deployed, bot developers tweak the characteristics of their accounts, thus evading detection. This evolution therefore requires new observations of bot mischiefs to grasp the characteristics of the evolved bots. As a crucial consequence of this reactive schema, scholars and OSN administrators are constantly one step behind bot developers. We spend a large share of our time trying to figure out the characteristics of the evolved bots in order to cage them. If we stick to playing this cat-and-mouse game, social bot detection is bound to remain a major open issue — with serious repercussions on our societies — for many years to come.

 

Figure 2: Benefits of adopting adversarial approaches when operating in hostile environments. Both machine learning and social bot detection involve sequential R&D loops, where iterations may take a different amount of time according to the adopted R&D paradigm. Faster iterations mean that existing techniques can be improved, and new techniques can be developed, at a faster rate.

 

 

++++++++++

Changing the rules of the game

Playing the cat-and-mouse game as we have done until now clearly did not work the way we had hoped, and we are now left with online social ecosystems that are plagued by a multitude of malicious social bots. For many years, we leveraged the best learning algorithms and we devoted much effort to fighting social bots, as demonstrated by the deluge of papers on the topic (Cresci, 2020). Yet, it seems we are nowhere near solving the problem. Actually, the situation has never been this serious, with our very democracies endangered. So, why did we fail? Rather than blaming our poor performance in playing the cat-and-mouse game, we believe the root of the problem lies in the rules of the game themselves. It is the reactive nature of the war against bots that relegated us to a disadvantaged position, from which we never recovered. Thus, rather than multiplying our efforts in playing this inefficient game, we might be better off playing a different game altogether.

Social bot detection violates the same fundamental assumptions that gave rise to AML, as illustrated in Figures 1 and 2. Indeed, the task of social bot detection is neither stationary, because bots evolve over time, nor benign, because bots actively try to fool detectors. Figure 3 shows how recent progress in AML (Kurakin, et al., 2016), coupled with powerful behavioral models (Cao, 2010), gives scholars — for the first time — a chance of turning the adverse rules of social bot detection in their favor. Instead of taking countermeasures only after having collected evidence of new bot mischiefs, we propose a new approach where techniques are proactive and able to anticipate attacks and the next generations of bots, as highlighted in the right-hand side of Figure 2(b). This proactive approach begins by defining a model for OSN accounts. Existing models are able to leverage the behavior of an account, as well as the content it generates and its social graph (Cai, et al., 2017). The social bots under investigation are thus represented via the specified model. Their representation is the input of the simulation step, whose high-level goal is to find variants of the input representations (i.e., variants of the bots) that satisfy a given criterion. Within the context of this paper, the criterion could be that of resembling the original representations, or of resembling real legitimate accounts. In other words, the simulation should produce representations of malicious social bots that do not exist (yet), but that appear similar to existing ones. Then, every modified representation that comes out of the simulation is evaluated by means of a state-of-the-art detection technique, e.g., Cresci, et al. (2016). The evaluation aims at verifying whether the modified representations of the accounts are capable of evading detection. Those representations that evade detection are considered possible threats and can be taken into account during the design and development of novel detection techniques. With regard to the broader field of AML, the evolved social bots generated during the simulation step of the proactive approach represent adversarial examples. Finding good adversarial examples can, in turn, help scholars understand the weaknesses of existing bot detection systems before such weaknesses can be exploited by bot developers. Similarly to traditional AML, in adversarial social bot detection we experiment with possible attacks and threats to the detection techniques, in an effort that will quickly make them more robust (Cresci, Petrocchi, et al., 2019).
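
The sketch below summarizes the proactive loop just described. The functions `model_account`, `perturb`, `similarity`, and `detector` are hypothetical placeholders for a behavioral model, a variant generator, a plausibility check, and an existing bot classifier; they are not the authors' implementation.

```python
# Sketch of the proactive analysis loop: model known bots, simulate variants,
# and flag those that an existing detector fails to recognize. All function
# names are hypothetical placeholders, not an actual implementation.

def proactive_analysis(known_bots, model_account, perturb, detector,
                       similarity, n_variants=100, min_similarity=0.8):
    """Return simulated bot variants that evade the current detector."""
    threats = []
    for bot in known_bots:
        representation = model_account(bot)               # modeling step
        for _ in range(n_variants):
            variant = perturb(representation)             # simulation step
            if similarity(variant, representation) < min_similarity:
                continue                                  # discard implausible variants
            if detector(variant) == "legitimate":         # evaluation step
                threats.append(variant)                   # the variant evades detection
    return threats                                        # input to the (re)design step
```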

 

Figure 3: The coming age of adversarial social bot detection. Combined recent advances in adversarial machine learning, behavioral modeling, and social bot detection, give us — for the first time — the possibility to adopt an adversarial and proactive approach to the study and detection of social bots.

 

Although unlikely to completely defeat social bots, the application of the proposed adversarial and proactive approach would nonetheless bring groundbreaking benefits. The capability to foresee possible bot evolutions would not only allow us to test the detection rate of state-of-the-art techniques, but also, and foremost, to adapt them a priori and even to design new detection techniques. As a consequence of the additional design and experimentation allowed by the proactive approach, many bot evolutions would be detected from day zero. Overall, bots would see their chances to do harm severely restricted, with clear and immediate benefits for the online ecosystem and, ultimately, for our society (e.g., less fake news and propaganda).

 

++++++++++

Adversarial social bot detection

Although not under the explicit aegis of keywords such as adversarial learning and proactive detection, the first seeds of adversarial social bot detection were planted by Yang, et al. back in 2011–2013. In a noteworthy work, they provided the first evidence of social bot evolution (Yang, et al., 2013). While the first wave of social bots, populating OSNs until around 2011, was rather simplistic, the second wave featured characteristics that were quite advanced for the time. Unlike their predecessors, the social bots studied by Yang, et al. were used to purchase followers or to exchange followers with each other, in order to look more popular and credible. Moreover, they also used tools to automatically tweet many messages with the same meaning, but using different words. Other than studying the characteristics of those bots, Yang, et al. also developed a machine learning classifier specifically devised for detecting evolving bots. By investigating how the second wave of social bots differed from the first one, and by proposing a countermeasure to such evolving bots, Yang, et al. carried out the first ‘adversarial’ study of social bot detection. By relying on sophisticated features considering account relationships, tweet timing, and level of automation, their classifier was a real success. However, evolution still went on, and while bot hunters advanced one step, bot developers advanced ten. Bot evolution thus leads us to 2016, when Ferrara, et al. documented the present situation, involving a third generation of social bots (Ferrara, et al., 2016). Needless to say, Yang’s classifier was no longer successful at detecting this third wave of social bots (Cresci, et al., 2017).

After Yang, et al.’s first adversarial work, many years passed before another such study was carried out. Indeed, only recently have Cresci, Petrocchi, et al. (2019) and Grimme, et al. (2018) proposed new adversarial studies in social bot detection. Asking a key question — whether it is crucial to recognize if a single account is a bot or not, rather than to detect the strategies with which such accounts act — Grimme, et al. (2018) created dozens of Twitter bots. For some weeks, their bots were fully automated, retweeting cryptocurrency-related tweets. Then, they also experimented with so-called cyborgs — that is, hybrid human- and software-operated accounts. During the course of the experiment, the boticity score of the accounts was measured by Botometer (Varol, et al., 2017), a popular bot detection service. Grimme, et al. (2018) found that their fully-automated bots fell into a grey zone of the boticity score, meaning that Botometer could not reliably tell whether those accounts were bots or not. Their cyborgs, instead, consistently received a low boticity score. This experiment perfectly stages the cat-and-mouse game: the interplay between a high level of automation, a characteristic traditionally associated with social bots, and features usually associated with human behavior throws even state-of-the-art, well-established detection techniques off track.

The two previous examples provide evidence of germinal works in adversarial social bot detection. Furthermore, they also put in the spotlight the challenges posed by malicious accounts, be they bots or cyborgs, that evade even the best-of-breed detection techniques. However, the detection techniques used in both Grimme, et al. (2018) and Yang, et al. (2013) were developed as part of the traditional reactive schema. Interestingly, a few recent works moved forward and experimented with working implementations of the adversarial social bot detection approach, adhering to the proactive schema of Figure 2(b). In particular, Cresci, Petrocchi, et al. (2019) implemented each step of the adversarial approach, as described in Figure 4. They propose to model OSN accounts via so-called digital DNA (Cresci, et al., 2016), a behavioral modeling technique where the lifetime of an account is encoded as a sequence of characters, corresponding to the sequence of actions performed by the account. Having a DNA-based representation of OSN accounts, they then propose to simulate possible social bot evolutions and bot behaviors by employing genetic algorithms. A customized genetic algorithm modifies the digital DNA of current social bots, iteratively selecting the ‘best’ bot evolutions, so as to converge towards social bots that resemble the behavioral characteristics of legitimate accounts. Next, they evaluate the evolved bots resulting from the simulation step by classifying them with state-of-the-art bot detection techniques. Notably, results highlighted that many of the evolved bots managed to evade detection. The bot evolution simulated in Cresci, Petrocchi, et al. (2019) caused a steep performance drop in a state-of-the-art bot classifier, whose accuracy plummeted from 0.9 to 0.5. This experiment, although limited to one modeling technique and a few detection techniques, thus managed to reproduce the evolutionary behavior of social bots which, until now, we could only passively observe. More importantly, a study of the characteristics of the evolved bots that evade detection also highlighted room for improvement in current bot detection techniques, thus demonstrating the usefulness of adversarial social bot detection. Another, more general, example of adversarial social bot detection is discussed in Wu, et al. (2020). This more recent endeavor leverages generative adversarial networks (GANs) for the simulation step, thus going beyond the limitations of DNA-like account representations and exploiting the power of deep neural networks. In Wu, et al. (2020), a GAN is used to artificially create a large number of adversarial bot examples (i.e., simulated evolved bots) that the authors subsequently exploit to train downstream bot detectors. Results demonstrated that this approach augments the training phase of the bot detector, which receives many more examples of evolved and sophisticated bots, thus significantly boosting its detection performance.
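
To give a flavor of how such a simulation step might work, the toy sketch below mutates digital-DNA strings with a bare-bones genetic algorithm so that a bot's action sequence drifts toward the action mix of legitimate accounts. The three-character alphabet, the fitness function, and all parameters are illustrative assumptions of ours and are far simpler than the customized algorithm of Cresci, Petrocchi, et al. (2019).

```python
# Toy sketch of a simulation step: mutate digital-DNA strings (one character
# per account action) with a simple genetic algorithm so that bot sequences
# drift toward the statistical profile of legitimate accounts. Alphabet,
# fitness, and parameters are illustrative assumptions, not the original ones.
import random
from collections import Counter

ALPHABET = "ACT"                      # e.g., A = tweet, C = reply, T = retweet

def char_distribution(dna):
    counts = Counter(dna)
    return {c: counts[c] / len(dna) for c in ALPHABET}

def fitness(dna, legit_profile):
    """Higher when the sequence's action mix resembles legitimate behavior."""
    dist = char_distribution(dna)
    return -sum(abs(dist[c] - legit_profile[c]) for c in ALPHABET)

def mutate(dna, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in dna)

def evolve(bot_dna, legit_profile, generations=50, pop_size=30, keep=10):
    population = [bot_dna] * pop_size
    for _ in range(generations):
        population = [mutate(ind) for ind in population]
        population.sort(key=lambda ind: fitness(ind, legit_profile), reverse=True)
        survivors = population[:keep]                     # selection
        population = survivors * (pop_size // keep)       # next generation
    return population[0]                                  # best evolved variant

# Toy usage: a spammy bot (only retweets) evolved toward a legitimate action mix.
legit_profile = {"A": 0.6, "C": 0.3, "T": 0.1}
evolved = evolve("T" * 200, legit_profile)
```

In a full pipeline, each evolved sequence would then be fed to an existing detector, exactly as in the evaluation step sketched earlier, and any sequence classified as legitimate would be treated as a threat to be addressed at design time.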

 

Figure 4: A practical example of adversarial social bot detection. In this instantiation of the proactive macro-analytical process, legitimate and bot accounts are modeled via digital DNA. Then, possible evolutions of social bots are simulated via genetic algorithms. Such simulations may give rise to previously unseen evolved bots. Subsequently, evolved bots are compared with legitimate accounts. Those that cannot be distinguished from legitimate accounts, according to existing detection techniques, are considered threats. Finally, existing techniques are strengthened or new detection techniques are designed in order to detect the newly identified threats.

 

 

++++++++++

Ethics of adversarial bot detection

While simplifying many everyday tasks, the widespread adoption of AI is also causing new problems, such as algorithmic filtering, algorithmic bias, and the current limits in the explainability of predictive models, that raise serious ethical concerns. Within this context, algorithmic approaches to the study, development, and detection of social bots are no exception (de Lima Salge and Berente, 2017). For instance, one might naively think that all scientific endeavors devoted to the development of social bots — including those that are part of the adversarial approach to social bot detection — are to be blamed, since they are likely to give an advantage to bot developers.

Interestingly, a recent viewpoint paper on ethics and social bots (de Lima Salge and Berente, 2017) can help us understand the ethical implications of adversarial social bot detection. By accounting for current laws, moral norms, and duties, de Lima Salge and Berente (2017) conclude that research and development of social bots should not always be considered unethical. This is particularly true in those cases, such as adversarial social bot detection, where tampering with bots is needed in order to achieve a greater good. In our case, the greater good is that of online social ecosystems cleansed of malicious bots.

Ethical concerns about adversarial social bot detection can also be assuaged by examining common practices in the related scientific fields of security and machine learning. As already anticipated, describing successful attacks on systems and protocols is a fully accepted practice in security (Lowe, 1996). In fact, it has long been considered one of the most valuable contributions to the field, because it allows existing systems to be rapidly strengthened. Learning from these experiences in security, we could say that while a single effort to experiment with possible attacks could result in an advantage to attackers (e.g., bot developers), an organized scientific effort ultimately ends up supporting defenders (e.g., bot hunters). Similarly, the flourishing of adversarial studies in the field of machine learning also resulted in better algorithms that are more accurate and more robust to possible attacks (Goodfellow, et al., 2018).

 

++++++++++

A call to arms

Social bot detection is at a turning point. So far, the success of bots at evading detection and their constant proliferation demonstrate that fighting them by means of a reactive schema is fighting a losing battle. Quoting the famous poet Robert Frost, ‘two roads diverged in a wood’ and it is time to take ‘the one less traveled by’. That road cannot be walked alone. Promising initial results suggest that an adversarial approach can be effective at proactively facing the plague of ever-evolving social bots. However, such results are only incipient. The vision of adversarial social bot detection embraces a bigger picture that cannot be limited to evaluating its effectiveness with just one modeling technique and a few detection algorithms, as done until now. We thus call for a community effort to experiment with the adversarial and proactive schema, by applying it to diverse datasets and leveraging a broad array of modeling and detection techniques.

Despite the attractiveness of walking this unexplored path, the road will probably be full of obstacles, with many open challenges to face. Above all stands the implementation of the adversarial approach without the constraint of using a DNA-based account model: future efforts must also take into account the profile, network, and content characteristics of social bots, thus providing multi-layered analyses. Finally, a key step in the adversarial schema, namely the design step, is still at its dawn. To date, there is no automated process that learns how to improve a detection technique that fails to recognize evolved bots. As of now, this task still requires the skills and creativity of the designer, who must spot why such evolved bots are effective at evading detection and adapt the technique accordingly to make it more robust. Covering different models and automating the proactive step are challenging endeavors, for which we call on scholars and practitioners to join the effort.

 

About the authors

Dr. Stefano Cresci is a researcher at IIT-CNR, Italy. His interests broadly fall at the intersection of Web science and data science, with a focus on social media analysis, social bot detection, and crisis informatics. Stefano received a Ph.D. in Information Engineering from the University of Pisa. He is a member of IEEE and ACM.
E-mail: s [dot] cresci [at] iit [dot] cnr [dot] it

Dr. Marinella Petrocchi is a researcher at IIT-CNR, Italy and collaborates with IMT Scuola Alti Studi Lucca, Italy. Her interests focus on detection techniques able to unveil fake online accounts and fake online reviews. Currently, she participates in the Integrated Activity Project TOFFFe (TOols for Fighting FakEs) — https://toffee.imtlucca.it/home — and in the H2020 European Cybersecurity Competence Network SPARTA.
E-mail: m [dot] petrocchi [at] iit [dot] cnr [dot] it

Prof. Angelo Spognardi is an associate professor at the Department of Computer Science, Sapienza University of Rome, Italy. His main research interests are on social networks modeling and analysis, network protocol security, and privacy.
E-mail: spognardi [at] di [dot] uniroma1 [dot] it

Dr. Stefano Tognazzi is a postdoc at the University of Konstanz, Germany, working on Model Reduction Techniques at the Centre for the Advanced Study of Collective Behaviour. He obtained a Master’s degree in computer science at the University of Udine with a thesis on logic programming and a Ph.D. in computer science and systems engineering at IMT School for Advanced Studies Lucca, with a thesis on model reduction techniques.
E-mail: stefano [dot] tognazzi [at] uni-konstanz [dot] de

 

References

C. Cai, L. Li, and D. Zeng, 2017. “Detecting social bots by jointly modeling deep behavior and content information,” CIKM ’17: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1,995–1,998.
doi: https://doi.org/10.1145/3132847.3133050, access 10 May 2021.

L. Cao, 2010. “In-depth behavior understanding and use: The behavior informatics approach,” Information Sciences, volume 180, number 17, pp. 3,067–3,085.
doi: https://doi.org/10.1016/j.ins.2010.03.025, access 10 May 2021.

S. Cresci, 2020. “A decade of social bot detection,” Communications of the ACM, volume 63, number 10, pp. 72–83.
doi: https://doi.org/10.1145/3409116, access 10 May 2021.

S. Cresci, M. Petrocchi, A. Spognardi, and S. Tognazzi, 2019. “Better safe than sorry: An adversarial approach to improve social bot detection,” WebSci ’19: Proceedings of the 10th ACM Conference on Web Science, pp. 47–56.
doi: https://doi.org/10.1145/3292522.3326030, access 10 May 2021.

S. Cresci, F. Lillo, D. Regoli, S. Tardelli, and M. Tesconi, 2019. “Cashtag piggybacking: Uncovering spam and bot activity in stock microblogs on Twitter,” ACM Transactions on the Web, article number 11.
doi: https://doi.org/10.1145/3313184, access 10 May 2021.

S. Cresci, R. Di Pietro, M. Petrocchi, A. Spognardi, and M. Tesconi, 2017. “The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race,” WWW ’17 Companion: Proceedings of the 26th International Conference on World Wide Web Companion, pp. 963–972.
doi: https://doi.org/10.1145/3041021.3055135, access 10 May 2021.

S. Cresci, R. Di Pietro, M. Petrocchi, A. Spognardi, and M. Tesconi, 2016. “DNA-inspired online behavioral modeling and its application to spambot detection,” IEEE Intelligent Systems, volume 31, number 5, pp. 58–64.
doi: https://doi.org/10.1109/MIS.2016.29, access 10 May 2021.

C.A. de Lima Salge and N. Berente, 2017. “Is that social bot behaving unethically?” Communications of the ACM, volume 60, number 9, pp. 29–31.
doi: https://doi.org/10.1145/3126492, access 10 May 2021.

E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, 2016. “The rise of social bots,” Communications of the ACM, volume 59, number 7, pp. 96–104.
doi: https://doi.org/10.1145/2818717, access 10 May 2021.

I. Goodfellow, P. McDaniel, and N. Papernot, 2018. “Making machine learning robust against adversarial inputs,” Communications of the ACM, volume 61, number 7, pp. 56–66.
doi: https://doi.org/10.1145/3134599, access 10 May 2021.

C. Grimme, D. Assenmacher, and L. Adam, 2018. “Changing perspectives: Is it sufficient to detect social bots?” In: G. Meiselwitz (editor). Social computing and social media: User experience and behavior. Lecture Notes in Computer Science, volume 10913. Cham, Switzerland: Springer, pp. 445–461.
doi: https://doi.org/10.1007/978-3-319-91521-0_32, access 10 May 2021.

A. Kurakin, I. Goodfellow, and S. Bengio, 2016. “Adversarial machine learning at scale,” arXiv: 1611.01236 (4 November), at https://arxiv.org/abs/1611.01236, access 10 May 2021.

G. Lowe, 1996. “Some new attacks upon security protocols,” Proceedings Ninth IEEE Computer Security Foundations Workshop, pp. 162–169.
doi: https://doi.org/10.1109/CSFW.1996.503701, access 10 May 2021.

C. Shao, G.L. Ciampaglia, O. Varol, K.-C. Yang, A. Flammini, and F. Menczer, 2018. “The spread of low-credibility content by social bots,” Nature Communications, volume 9, article number 4787 (20 November).
doi: https://doi.org/10.1038/s41467-018-06930-7, access 10 May 2021.

M. Stella, E. Ferrara, and M. De Domenico, 2018. “Bots increase exposure to negative and inflammatory content in online social systems,” Proceedings of the National Academy of Sciences, volume 115, number 49 (20 November), pp. 12,435–12,440.
doi: https://doi.org/10.1073/pnas.1803470115, access 10 May 2021.

O. Varol, E. Ferrara, C. Davis, F. Menczer, and A. Flammini, 2017. “Online human-bot interactions: Detection, estimation, and characterization,” Proceedings of the International AAAI Conference on Web and Social Media, volume 11, at https://ojs.aaai.org/index.php/ICWSM/article/view/14871, access 10 May 2021.

B. Wu, L. Liu, Y. Yang, K. Zheng, and X. Wang, 2020. “Using improved conditional generative adversarial networks to detect social bots on Twitter,” IEEE Access, volume 8, pp. 36,664–36,680.
doi: https://doi.org/10.1109/ACCESS.2020.2975630, access 10 May 2021.

C. Yang, R. Harkreader, and G. Gu, 2013. “Empirical evaluation and new design for fighting evolving Twitter spammers,” IEEE Transactions on Information Forensics and Security, volume 8, number 8, pp. 1,280–1,293.
doi: https://doi.org/10.1109/TIFS.2013.2267732, access 10 May 2021.

 


Editorial history

Received 1 December 2020; accepted 26 April 2021.


This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

The coming age of adversarial social bot detection
by Stefano Cresci, Marinella Petrocchi, Angelo Spognardi, and Stefano Tognazzi.
First Monday, Volume 26, Number 6 - 7 June 2021
https://firstmonday.org/ojs/index.php/fm/article/download/11474/10139
doi: http://dx.doi.org/10.5210/fm.v26i6.11474