{"title":"对社交网络上不可靠账户的实用检测","authors":"Nuno Guimarães , Álvaro Figueira , Luís Torgo","doi":"10.1016/j.osnem.2021.100152","DOIUrl":null,"url":null,"abstract":"<div><p>In recent years, the problem of unreliable content in social networks has become a major threat, with a proven real-world impact in events like elections and pandemics, undermining democracy and trust in science, respectively. Research in this domain has focused not only on the content but also on the accounts that propagate it, with the bot detection task having been thoroughly studied. However, not all bot accounts work as unreliable content spreaders (p.e. bot for news aggregation), and not all human accounts are necessarily reliable. In this study, we try to distinguish unreliable from reliable accounts, independently of how they are operated. In addition, we work towards providing a methodology capable of coping with real-world situations by introducing the content available (restricting it by volume- and time-based batches) as a parameter of the methodology. Experiments conducted on a validation set with a different number of tweets per account provide evidence that our proposed solution produces an increase of up to 20% in performance when compared with traditional (individual) models and with cross-batch models (which perform better with different batches of tweets).</p></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.osnem.2021.100152","citationCount":"1","resultStr":"{\"title\":\"Towards a pragmatic detection of unreliable accounts on social networks\",\"authors\":\"Nuno Guimarães , Álvaro Figueira , Luís Torgo\",\"doi\":\"10.1016/j.osnem.2021.100152\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In recent years, the problem of unreliable content in social networks has become a major threat, with a proven real-world impact in events like elections and pandemics, undermining democracy and trust in science, respectively. Research in this domain has focused not only on the content but also on the accounts that propagate it, with the bot detection task having been thoroughly studied. However, not all bot accounts work as unreliable content spreaders (p.e. bot for news aggregation), and not all human accounts are necessarily reliable. In this study, we try to distinguish unreliable from reliable accounts, independently of how they are operated. In addition, we work towards providing a methodology capable of coping with real-world situations by introducing the content available (restricting it by volume- and time-based batches) as a parameter of the methodology. 
Experiments conducted on a validation set with a different number of tweets per account provide evidence that our proposed solution produces an increase of up to 20% in performance when compared with traditional (individual) models and with cross-batch models (which perform better with different batches of tweets).</p></div>\",\"PeriodicalId\":52228,\"journal\":{\"name\":\"Online Social Networks and Media\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1016/j.osnem.2021.100152\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Online Social Networks and Media\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2468696421000343\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Online Social Networks and Media","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468696421000343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
In recent years, unreliable content on social networks has become a major threat, with a proven real-world impact on events such as elections and pandemics, undermining democracy and trust in science, respectively. Research in this domain has focused not only on the content but also on the accounts that propagate it, with the bot detection task having been studied extensively. However, not all bot accounts act as spreaders of unreliable content (e.g., bots for news aggregation), and not all human accounts are necessarily reliable. In this study, we try to distinguish unreliable from reliable accounts, independently of how they are operated. In addition, we work towards a methodology capable of coping with real-world situations by introducing the content available for each account, restricted to volume- and time-based batches of tweets, as a parameter of the approach. Experiments conducted on a validation set with varying numbers of tweets per account provide evidence that our proposed solution yields an improvement of up to 20% in performance when compared with traditional (individual) models and with cross-batch models (which perform better when different batches of tweets are used).
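As a rough illustration of what restricting the available content by volume- and time-based batches could look like in practice, the Python sketch below groups a single account's timeline into fixed-size batches and into fixed-length time windows. It is not the authors' implementation; the function names, the created_at field, and the default batch parameters are assumptions made for the example.

```python
# A rough sketch (not the authors' code) of splitting an account's timeline
# into the volume- and time-based batches the abstract refers to.
# Function names, the "created_at" field, and the batch parameters are assumptions.
from datetime import datetime, timedelta
from typing import Any, Dict, List

Tweet = Dict[str, Any]


def volume_batches(tweets: List[Tweet], batch_size: int = 100) -> List[List[Tweet]]:
    """Group tweets, most recent first, into fixed-size (volume-based) batches."""
    ordered = sorted(tweets, key=lambda t: t["created_at"], reverse=True)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]


def time_batches(tweets: List[Tweet], window_days: int = 30) -> List[List[Tweet]]:
    """Group tweets into consecutive fixed-length time windows, newest window first."""
    ordered = sorted(tweets, key=lambda t: t["created_at"], reverse=True)
    batches: List[List[Tweet]] = []
    current: List[Tweet] = []
    window_start = None  # lower bound (exclusive) of the current window
    for tweet in ordered:
        ts = tweet["created_at"]
        if window_start is None or ts <= window_start:
            # First tweet, or tweet falls outside the current window: open a new one.
            if current:
                batches.append(current)
            current = []
            window_start = ts - timedelta(days=window_days)
        current.append(tweet)
    if current:
        batches.append(current)
    return batches


if __name__ == "__main__":
    # Toy timeline: one tweet per week for ten weeks.
    timeline = [
        {"text": f"tweet {i}", "created_at": datetime(2021, 1, 1) + timedelta(weeks=i)}
        for i in range(10)
    ]
    print([len(b) for b in volume_batches(timeline, batch_size=4)])  # [4, 4, 2]
    print([len(b) for b in time_batches(timeline, window_days=30)])  # [5, 5]
```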