Distinguishing Real Web Crawlers from Fakes: Googlebot Example
Nilani Algiryage
2018 Moratuwa Engineering Research Conference (MERCon), pp. 13-18, May 2018
DOI: 10.1109/MERCON.2018.8421894
Citations: 6
Abstract
Web crawlers are programs or automated scripts that scan web pages methodically to create indexes. Search engines such as Google and Bing use crawlers to provide web users with relevant information. Today, many crawlers impersonate well-known web crawlers; for example, Google's Googlebot has been observed to be impersonated to a high degree. This raises ethical and security concerns, as such impersonators can potentially be used for malicious purposes. In this paper, we present an effective methodology for detecting fake Googlebot crawlers by analyzing web access logs. We propose using Markov chain models to learn profiles of real and fake Googlebots based on the patterns in their web resource access sequences. We calculated log-odds ratios for a given set of crawler sessions, and our results show that the higher the log-odds score, the higher the probability that a given sequence comes from the real Googlebot. Experimental results show that, at a threshold log-odds score, we can distinguish the real Googlebot from the fake.
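The core idea of the paper can be sketched as follows: learn one Markov chain over resource-access transitions from real-Googlebot sessions and another from fake sessions, then score a new session by the log-odds ratio of its transition sequence under the two models. The resource labels and every transition probability below are invented for illustration; the paper estimates such profiles from actual web access logs.

```python
import math

# Hypothetical first-order transition probabilities over request types,
# standing in for profiles learned from real vs. fake Googlebot sessions.
REAL = {
    ("robots", "html"): 0.7, ("robots", "img"): 0.3,
    ("html", "html"): 0.5, ("html", "img"): 0.5,
    ("img", "html"): 0.6, ("img", "img"): 0.4,
}
FAKE = {
    ("robots", "html"): 0.2, ("robots", "img"): 0.8,
    ("html", "html"): 0.1, ("html", "img"): 0.9,
    ("img", "html"): 0.3, ("img", "img"): 0.7,
}

def log_odds(session, real=REAL, fake=FAKE):
    """Sum log P(transition | real) - log P(transition | fake) over a session.

    A higher score means the access sequence is more consistent with the
    real-Googlebot profile than with the fake one."""
    score = 0.0
    for prev, cur in zip(session, session[1:]):
        score += math.log(real[(prev, cur)]) - math.log(fake[(prev, cur)])
    return score

# A session resembling the real profile scores positive; one resembling
# the fake profile scores negative, so a threshold on the score separates them.
print(log_odds(["robots", "html", "html", "img"]))  # positive
print(log_odds(["robots", "img", "img", "img"]))    # negative
```

In practice a threshold on this score, as the paper describes, decides whether a session is attributed to the real Googlebot or to an impersonator.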