Evolutionary study of phishing
Danesh Irani, Steve Webb, Jonathon T. Giffin, C. Pu
2008 eCrime Researchers Summit. Pub Date: 2008-12-08. DOI: 10.1109/ECRIME.2008.4696967
We study the evolution of phishing email messages in a corpus of over 380,000 phishing messages collected from August 2006 to December 2007. Our first result is a classification of phishing messages into two groups: flash attacks and non-flash attacks. Phishing message producers try to extend the usefulness of a phishing message by reusing the same message. In some cases this is done by sending a large volume of phishing messages over a short period of time (flash attacks), rather than spreading the same phishing message over a relatively longer period (non-flash attacks). Our second result is a corresponding classification of phishing features into two groups: transitory features and pervasive features. Features that are present in few attacks and have a relatively short life span (transitory) are generally strong indicators of phishing, whereas features that are present in most attacks and have a long life span (pervasive) are generally weak selectors of phishing. One explanation is that phishing message producers limit the utility of transitory features over time (by avoiding them in future generations of phishing) and limit the utility of pervasive features by choosing features that also appear in legitimate messages. While useful in improving the understanding of phishing messages, our results also show the need for further study.
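The flash/non-flash distinction above can be sketched as grouping duplicate messages and labeling each group by the time span over which its copies were sent. This is a hypothetical illustration only: the two-day cutoff and the exact-duplicate grouping are assumptions, not the paper's actual criteria.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical cutoff -- the paper does not publish an exact threshold.
FLASH_WINDOW = timedelta(days=2)

def classify_attacks(messages):
    """Group phishing messages by (identical) body text and label each
    group 'flash' if all copies fall within FLASH_WINDOW, else 'non-flash'."""
    groups = defaultdict(list)
    for body, sent_at in messages:
        groups[body].append(sent_at)
    return {
        body: "flash" if max(times) - min(times) <= FLASH_WINDOW else "non-flash"
        for body, times in groups.items()
    }

labels = classify_attacks([
    ("Update your account now", datetime(2007, 1, 1)),   # burst over one day
    ("Update your account now", datetime(2007, 1, 2)),
    ("Verify your password",    datetime(2006, 9, 1)),   # drawn out over months
    ("Verify your password",    datetime(2007, 3, 1)),
])
```

In a real corpus the grouping key would likely be a normalized or fuzzy fingerprint of the message rather than the verbatim body, since phishers introduce small variations between copies.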
The consequence of non-cooperation in the fight against phishing
T. Moore, R. Clayton
2008 eCrime Researchers Summit. Pub Date: 2008-12-08. DOI: 10.1109/ECRIME.2008.4696968
A key way in which banks mitigate the effects of phishing is to have fraudulent websites removed or abusive domain names suspended. This "take-down" is often subcontracted to specialist companies. We analyse six months of "feeds" of phishing website URLs from multiple sources, including two such companies. We demonstrate that in each case huge numbers of websites may be known to others, yet the company with the take-down contract remains unaware of them, or learns of them only belatedly. We monitored all of the websites to determine when they were removed and calculated the resulting increase in lifetimes caused by the take-down company not knowing that it should act. The results categorically demonstrate that significant amounts of money are being put at risk by the failure to share proprietary feeds of URLs. We analyse the incentives that prevent data sharing by take-down companies, contrasting this with the anti-virus industry, where sharing prevails, and with schemes for purchasing vulnerability information, where information about attacks is kept proprietary. We conclude by recommending that those defending against phishing attacks start cooperatively sharing all of their data about phishing URLs with each other.
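The "increase in lifetimes" argument can be illustrated with a small calculation: for each URL, the avoidable lifetime is the gap between when some other feed first recorded it and when the contracted take-down company learned of it. A minimal sketch, with function names and hours-based accounting that are illustrative rather than taken from the paper:

```python
from datetime import datetime

def extra_lifetime_hours(first_seen_elsewhere, first_seen_by_takedown_co):
    """Hours of avoidable site lifetime: how much later the responsible
    take-down company learned of a phishing URL than some other feed did.
    Negative gaps (the company knew first) count as zero."""
    gap = first_seen_by_takedown_co - first_seen_elsewhere
    return max(gap.total_seconds() / 3600.0, 0.0)

def total_avoidable_hours(sightings):
    """Sum avoidable lifetime over (seen_elsewhere, seen_by_company) pairs."""
    return sum(extra_lifetime_hours(a, b) for a, b in sightings)
```

Multiplying the aggregate avoidable hours by an estimated per-hour fraud loss is the kind of step that turns this delay into the "money at risk" figure the abstract alludes to.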
A distributed architecture for phishing detection using Bayesian Additive Regression Trees
Saeed Abu-Nimeh, D. Nappa, Xinlei Wang, S. Nair
2008 eCrime Researchers Summit. Pub Date: 2008-12-08. DOI: 10.1109/ECRIME.2008.4696965
Given the variety of applications on mobile devices, such devices are no longer merely calling gadgets. Various applications are used to browse the Internet, access financial data, and store sensitive personal information. Consequently, mobile devices are exposed to several types of attacks. In particular, phishing attacks can easily take advantage of the limited (or absent) security and defense applications on such devices. Furthermore, their limited power, storage, and processing capabilities make conventional machine learning techniques ill-suited to classifying phishing and spam emails on the device. This study proposes a distributed architecture, hinging on machine learning approaches, for detecting phishing emails in a mobile environment based on a modified version of Bayesian additive regression trees (BART). Because BART suffers from high computational time and memory overhead, distributed algorithms are proposed to accommodate detection applications in resource-constrained wireless environments.
Lessons from a real world evaluation of anti-phishing training
P. Kumaraguru, Steve Sheng, A. Acquisti, L. Cranor, Jason I. Hong
2008 eCrime Researchers Summit. Pub Date: 2008-12-08. DOI: 10.1109/ECRIME.2008.4696970
Prior laboratory studies have shown that PhishGuru, an embedded training system, is an effective way to teach users to identify phishing scams. PhishGuru users are sent simulated phishing attacks and trained after they fall for the attacks. In the current study, we extend the PhishGuru methodology to train users about spear phishing and test it in a real world setting with employees of a Portuguese company. Our results demonstrate that the findings of PhishGuru laboratory studies do indeed hold up in a real world deployment. Specifically, the results from the field study showed that a large percentage of people who clicked on links in simulated emails proceeded to give some form of personal information to fake phishing websites, and that participants who received PhishGuru training were significantly less likely to fall for subsequent simulated phishing attacks one week later. This paper also presents some additional new findings. First, people trained with spear phishing training material did not make better decisions in identifying spear phishing emails compared to people trained with generic training material. Second, we observed that PhishGuru training could be effective in training other people in the organization who did not receive training messages directly from the system. Third, we also observed that employees in technical jobs were no different from employees in non-technical jobs in identifying phishing emails before and after the training. We conclude with some lessons that we learned in conducting the real world study.
Internet Situation Awareness
Malte Hesse, N. Pohlmann
2008 eCrime Researchers Summit. Pub Date: 2008-12-08. DOI: 10.1109/ECRIME.2008.4696966
The Internet consists of autonomous systems, each managed by separate and often rival organizations, which makes it very difficult to capture as a whole. Internet situation awareness can be achieved by creating a common basis on which private and public operators monitor their networks. Thus an overlay monitoring layer is needed, which addresses a very important requirement for a more secure and trustworthy Internet: the need of various stakeholders for the information required to perform their decision tasks reliably. This can be accomplished by offering them a common approach and the additional benefit of a global view against which they can compare their local situation. This approach should build on well-proven existing global statistics, best practices, and existing technical sensors, which can be adapted to the overall common framework. From this, output can be generated for all relevant stakeholders, such as national assessment centers, to meet their individual needs. One possible input source is the technical sensor technology developed by our Institute for Internet Security, which we provide to partners and other researchers free of charge. It is a good basis for Internet situation awareness: it is a well-proven system that has been in operation for several years and can easily be adapted by our developers to comply with the overall framework. Its further advantages are that (i) it is privacy-compliant by design, (ii) it offers high performance, and (iii) it supports long-term storage of the collected raw data. Using raw data collected at various positions in the Internet infrastructure, we aim to generate a continuous global view of the current state of the Internet, which can serve as input for Internet situation awareness.
Practice & prevention of home-router mid-stream injection attacks
Steven Myers, Sid Stamm
2008 eCrime Researchers Summit. Pub Date: 2008-12-08. DOI: 10.1109/ECRIME.2008.4696969
The vulnerability of home routers has been widely discussed, but there has been significant skepticism in many quarters about the viability of using them to perform damaging attacks. Others have argued that traditional malware prevention technologies will work for routers. In this paper we show how easily and effectively a home router can be repurposed to perform a mid-stream script injection attack. This attack transparently and indiscriminately siphons off user-entered form data from arbitrary (non-encrypted) websites, including usernames and passwords. Moreover, the attack can persist over a long period of time, affecting the user at a large number of sites and allowing a user's information to be easily correlated by a single attacker. The script injection attack is performed through malware placed on an insecure home router, between the client and server. We implemented the attack on a commonly deployed home router to demonstrate its realizability and potential. We then propose and implement efficient countermeasures to discourage or prevent both our attack and other Web-targeted script injection attacks. The countermeasures are a form of short-term tamper prevention based on obfuscation and cryptographic hashing, taking advantage of the fact that Web scripts are both delivered and interpreted on demand. Rather than preventing the possibility of attack altogether, they simply raise the cost of the attack enough to make it unprofitable, removing the incentive to attack in the first place. These countermeasures are robust and practically deployable: they permit caching, are deployed server-side, and push most of the computational effort to the client. Furthermore, they do not require modification of browsers or Internet standards, nor do they require cryptographic certificates or frequent expensive cryptographic operations, a stumbling block for the proper deployment of SSL on many Web servers run by small to medium-sized businesses.
Automating phishing website identification through deep MD5 matching
Brad Wardman, Gary Warner
2008 eCrime Researchers Summit. Pub Date: 2008-10-01. DOI: 10.1109/ECRIME.2008.4696972
The timeliness of phishing incident response is hindered by the need for human verification of whether suspicious URLs are actually phishing sites. This paper presents a method for automating that determination by comparing new URLs and their associated Web content with previously archived content of confirmed phishing sites, and demonstrates its effectiveness in reducing the number of suspicious URLs that need human review. The results can be used to automate shutdown requests, to supplement traditional "URL blacklist" toolbars by allowing the blocking of previously unreported URLs, or to reveal dominant phishing site patterns that can be used to prioritize limited investigative resources.
Legal risks for phishing researchers
Christopher Soghoian
2008 eCrime Researchers Summit. Pub Date: 2008-09-10. DOI: 10.1109/ECRIME.2008.4696971
Researchers are increasingly turning to live, "in the wild" phishing studies of users, who unknowingly participate without giving informed consent. Such studies can expose researchers to a number of unique and fairly significant legal risks. This paper presents four case studies highlighting the steps that researchers have taken to avoid legal problems, along with the legal risks that they were unable to avoid. It then provides a high-level introduction to a few particularly dangerous areas of American law. Finally, it concludes with a series of best practices that may help researchers avoid legal trouble; however, this information should not be taken as legal advice.