The detection of automated accounts, also known as “social bots”, is an important concern for online social networks (OSNs). While several methods have been proposed for detecting social bots, significant research gaps remain. First, current models exhibit limitations in detecting sophisticated bots that aim to mimic genuine OSN users. Second, these methods often rely on simplistic profile features, which are susceptible to adversarial manipulation. Third, these models lack generalizability, resulting in subpar performance when trained on one dataset and tested on another.
To address these challenges, we propose a framework for social Bot detection with Self-Supervised Contrastive Learning (BotSSCL). Our framework leverages contrastive learning to distinguish between social bots and humans in the embedding space, improving linear separability. The high-level representations derived by BotSSCL enhance its resilience to variations in data distribution and ensure generalizability. We also evaluate BotSSCL’s robustness against adversarial attempts to manipulate bot accounts to evade detection. Experiments on two datasets featuring sophisticated bots demonstrate that BotSSCL outperforms supervised, unsupervised, and self-supervised baseline methods, achieving higher F1 performance than the state of the art (SOTA) on both datasets. In addition, BotSSCL achieves 67% F1 when trained on one dataset and tested on another, demonstrating its generalizability under cross-botnet evaluation. Lastly, under an adversarial evasion attack, BotSSCL increases the attack complexity for the adversary, who succeeds in evading detection only 4% of the time. The code is available at https://github.com/code4thispaper/BotSSCL.
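The abstract does not specify the contrastive objective BotSSCL uses. As an illustration of the general technique — pulling two augmented views of the same account together in the embedding space while pushing apart all other in-batch accounts — here is a minimal NumPy sketch of the widely used NT-Xent loss. The function name, batch setup, and temperature value are illustrative assumptions, not details from the paper:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2 : (N, d) arrays holding two augmented "views" of the same N
    accounts; (z1[i], z2[i]) are positive pairs, and every other in-batch
    pair serves as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)       # unit-normalize rows
    sim = z @ z.T / temperature                            # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                         # exclude self-similarity
    # Row i's positive sits at i+N (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

# Toy usage: random 16-dimensional embeddings for a batch of 4 accounts.
rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal((4, 16)), rng.standard_normal((4, 16))
loss = nt_xent_loss(z1, z2)
```

Minimizing this loss drives positive pairs toward high cosine similarity relative to negatives, which is what yields the linearly separable bot-versus-human embedding space the abstract describes.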