An in-vehicle infotainment (IVI) system is connected to heterogeneous networks such as the Controller Area Network bus, Bluetooth, Wi-Fi, cellular, and other vehicle-to-everything communications. An IVI system has control of a connected vehicle and handles privacy-sensitive information such as the current geolocation and destination, the phonebook, SMS messages, and the driver's voice. Several offensive studies of the IVI systems of commercial vehicles have shown the feasibility of car hacking. However, to date, there has been no comprehensive analysis of the impact and implications of IVI system exploitation. To understand the security and privacy concerns, we report our experience hosting an IVI system hacking competition, the Cyber Security Challenge 2021 (CSC2021), using the feature-flavored infotainment operating system Automotive Grade Linux (AGL). The participants submitted 33 reproducible and verified proof-of-concept exploits targeting 11 components of the AGL-based IVI testbed. They exploited four vulnerabilities to steal various data, manipulate the IVI system, and cause denial of service. The leaked data includes private and personally identifiable information as well as in-cabin voice recordings, and the participants also demonstrated lateral movement to electronic control units and smartphones. We conclude with lessons learned and three mitigation strategies to enhance the security of IVI systems.
S. Jeong, Minsoo Ryu, Hyunjae Kang, and H. Kim. "Infotainment System Matters: Understanding the Impact and Implications of In-Vehicle Infotainment System Hacking with Automotive Grade Linux." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3583650
Online trackers are invasive: they record our digital footprints, many of which are sensitive in nature, and when aggregated over time these records can reveal intricate details about our lifestyles and habits. Although much research has been conducted on the effectiveness of existing countermeasures on the desktop platform, little is known about how mobile browsers have evolved to handle online trackers. With mobile devices now generating more web traffic than their desktop counterparts, we fill this research gap through a large-scale comparative analysis of mobile web browsers. We crawl 10K valid websites from the Tranco list on real mobile devices. Our data collection covers both popular general-purpose browsers (e.g., Chrome, Firefox, and Safari) and privacy-focused browsers (e.g., Brave, DuckDuckGo, and Firefox Focus). We use dynamic analysis of runtime execution traces and static analysis of source code to highlight the tracking behavior of invasive fingerprinters. We also find evidence of tailored content being served to different browsers: Firefox Focus sees altered script code, whereas Brave and DuckDuckGo receive highly similar content. To test the browsers' privacy protection, we measure how each browser blocks trackers and advertisers and note the strengths and weaknesses of the privacy browsers. To establish ground truth, we use well-known block lists, including EasyList, EasyPrivacy, Disconnect, and WhoTracksMe, and find that Brave generally blocks the largest share of the content these lists flag, Firefox Focus performs better against social trackers, and DuckDuckGo restricts third-party trackers that perform email-based tracking.
Ahsan Zafar and Anupam Das. "Comparative Privacy Analysis of Mobile Browsers." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3583638
Despite long-ago predictions [1] that other user-authentication technologies would replace passwords, passwords remain pervasive and are likely to remain so [2]. This talk will describe our research on methods to tackle three key ingredients of account takeovers for password-protected accounts today: (i) site database breaches, the largest source of stolen passwords for internet sites; (ii) the tendency of users to reuse the same or similar passwords across sites; and (iii) credential stuffing, in which attackers submit credentials breached from one site in login attempts on the same accounts at another. A central theme of our research is that these factors are most effectively addressed by coordinating across sites, in contrast to today's practice of each site defending alone. We summarize the algorithms that drive this coordination, the efficacy and security of our proposals, and the scalability of our designs through working implementations.
M. Reiter. "Tackling Credential Abuse Together." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3587262
The increasingly pervasive use of big data and machine learning raises various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate these issues, focusing on iterative methods from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool for recovering utility is the iterative Bayesian update (IBU), an instance of the expectation-maximization method from statistics. I will show that the IBU, combined with a version of DP called d-privacy (also known as metric differential privacy), outperforms the state of the art, which is based on algebraic methods combined with the randomized response mechanism widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Then, I will discuss the issue of biased predictions in machine learning and how DP can affect the fairness and accuracy of the trained model. Finally, I will show that the IBU can also be applied in this domain to ensure fairer treatment of disadvantaged groups and reconcile fairness and accuracy.
C. Palamidessi. "Local Methods for Privacy Protection and Impact on Fairness." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3587263
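The iterative Bayesian update mentioned in the abstract is a compact fixed-point computation, so a small sketch may help. The following is a minimal illustration, not the speaker's implementation: it assumes k-ary randomized response as the local privacy mechanism and recovers an estimate of the true distribution from the noisy reports via the EM-style update.

```python
import math
import random
from collections import Counter

def rr_channel(k, eps):
    """k-ary randomized response: C[x][y] = P(report y | true value x)."""
    e = math.exp(eps)
    keep, flip = e / (e + k - 1), 1.0 / (e + k - 1)
    return [[keep if x == y else flip for y in range(k)] for x in range(k)]

def ibu(obs, C, iters=200):
    """Iterative Bayesian update (an instance of EM): estimate the true
    distribution from the empirical distribution of noisy reports."""
    k = len(C)
    q = [1.0 / k] * k                       # start from the uniform prior
    for _ in range(iters):
        nxt = [0.0] * k
        for y, f in enumerate(obs):
            if f == 0.0:
                continue
            denom = sum(q[x] * C[x][y] for x in range(k))
            for x in range(k):
                nxt[x] += f * q[x] * C[x][y] / denom
        q = nxt
    return q

# synthetic demo: perturb samples locally, then recover the distribution
random.seed(0)
k, eps, n = 4, 1.0, 50_000
true_dist = [0.5, 0.3, 0.15, 0.05]
C = rr_channel(k, eps)
xs = random.choices(range(k), weights=true_dist, k=n)
ys = [random.choices(range(k), weights=C[x])[0] for x in xs]
counts = Counter(ys)
obs = [counts[y] / n for y in range(k)]
est = ibu(obs, C)
print([round(p, 3) for p in est])           # should land near true_dist
```

With 50,000 reports at eps = 1, the estimate typically lands within a few percentage points of the true distribution, whereas the raw observed frequencies are visibly flattened by the noise.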
In classical secure multi-party computation (SMPC), it is assumed that a fixed and a priori known set of parties wants to securely evaluate a function of their private inputs. This assumption implies that online problems, in which the parties that arrive and leave over time are not known a priori, are not covered by the classical setting. Therefore, the notion of online SMPC has been introduced, and a general feasibility result has been proven showing that any online algorithm can be implemented as a distributed protocol that is secure in this setting [22, 23]. However, so far, no online SMPC protocol implementing a concrete online algorithm has been proposed and evaluated, so the practicality of the constructive proof remains an open question. We close this gap and propose the first privacy-preserving online SMPC protocol for the prominent problem of fully online matching with deadlines. In this problem, an a priori unknown set of parties arrive over time with their inputs and can be matched with other parties until they leave, when their individual deadlines are reached. We prove that our protocol is statistically secure in the presence of a semi-honest adversary that controls strictly less than half of the parties present at each point in time. We extensively evaluate the performance of our protocol in three network settings, with various input sizes, matching conditions, and numbers of parties.
Andreas Klinger and Ulrike Meyer. "Privacy-Preserving Fully Online Matching with Deadlines." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3583654
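For readers unfamiliar with the underlying online problem, here is a minimal greedy sketch of fully online matching with deadlines computed in the clear; the paper's contribution is performing such a computation as an SMPC protocol, which this sketch deliberately omits. The event format and the compatibility predicate are illustrative assumptions.

```python
import heapq

def online_matching(events, compatible):
    """Greedy fully online matching with deadlines, in the clear.

    events: (arrival_time, party_id, deadline) triples sorted by arrival.
    compatible: predicate deciding whether two waiting parties may match.
    Returns the list of matched pairs.
    """
    waiting = {}            # party_id -> deadline of still-unmatched parties
    expiry = []             # min-heap of (deadline, party_id)
    matches = []
    for t, pid, deadline in events:
        # drop parties whose deadline passed before this arrival
        while expiry and expiry[0][0] <= t:
            _, old = heapq.heappop(expiry)
            waiting.pop(old, None)       # may already have been matched
        # greedily match the newcomer with any compatible waiting party
        partner = next((w for w in waiting if compatible(pid, w)), None)
        if partner is not None:
            del waiting[partner]
            matches.append((partner, pid))
        else:
            waiting[pid] = deadline
            heapq.heappush(expiry, (deadline, pid))
    return matches

# toy run: a party is only compatible with one of opposite parity
evts = [(0, 1, 5), (1, 2, 4), (2, 4, 9), (3, 3, 8), (10, 5, 12)]
result = online_matching(evts, lambda a, b: (a + b) % 2 == 1)
print(result)   # party 5 arrives after everyone else has matched or expired
```

In the privacy-preserving version, the waiting set, inputs, and compatibility checks would all be secret-shared among the parties rather than held by a single coordinator as above.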
Talaya Farasat, Muhammad Ahmad Rathore, JongWon Kim
Kubernetes, a container orchestration tool, can be vulnerable to many network threats. A Distributed Denial-of-Service (DDoS) attack can render Kubernetes nodes and Pods/containers inaccessible to users. In this work, we highlight that the extended Berkeley Packet Filter with eXpress Data Path (eBPF/XDP) can protect Kubernetes Weave Net Pods from DDoS attacks by loading an XDP_DROP/FILTER program on the Weave Net VXLAN interface.
"Securing Kubernetes Pods communicating over Weave Net through eBPF/XDP from DDoS attacks." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3585049
Sebastian Surminski, Christian Niesler, Sebastian Linsner, Lucas Davi, Christian A. Reuter
From the perspective of end users, IoT devices behave like a black box: as long as they work as intended, users will not detect a compromise. Users have minimal control over the software, so it is very likely that a user will miss the illicit recordings and transmissions that occur when a security camera or a smart speaker is hacked. In this paper, we present SCAtt-man, the first remote attestation scheme that is specifically designed with the user in mind. SCAtt-man deploys software-based attestation to check the integrity of remote devices, allowing users to verify the integrity of IoT devices with their smartphones. The key novelty of SCAtt-man resides in its use of user-observable side channels, such as light or sound, in the attestation protocol. Our proof-of-concept implementation targets a smart speaker and an attestation protocol based on a data-over-sound protocol. Our evaluation demonstrates the effectiveness of SCAtt-man against a variety of attacks and its usability in a user study with 20 participants.
Sebastian Surminski, Christian Niesler, Sebastian Linsner, Lucas Davi, and Christian A. Reuter. "SCAtt-man: Side-Channel-Based Remote Attestation for Embedded Devices that Users Understand." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3583652
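The challenge-response core of software-based attestation is compact enough to sketch. The snippet below is a generic illustration, not SCAtt-man itself: SCAtt-man additionally carries the exchange over a user-observable channel such as sound, which is omitted here, and the reference image and digest construction are assumptions for the example.

```python
import hashlib
import hmac
import os

# verifier's known-good reference image (placeholder content for the sketch)
KNOWN_GOOD_FIRMWARE = b"\x00" * 1024

def attest_response(firmware: bytes, nonce: bytes) -> bytes:
    """Prover side: digest over a fresh nonce and the full firmware image.
    The nonce prevents the device from replaying a precomputed answer."""
    return hashlib.sha256(nonce + firmware).digest()

def verify(device_response: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute the expected digest from the reference image
    and compare in constant time."""
    expected = hashlib.sha256(nonce + KNOWN_GOOD_FIRMWARE).digest()
    return hmac.compare_digest(device_response, expected)

nonce = os.urandom(16)
ok = verify(attest_response(KNOWN_GOOD_FIRMWARE, nonce), nonce)
tampered = verify(attest_response(b"\x01" + b"\x00" * 1023, nonce), nonce)
print(ok, tampered)   # benign image passes, modified image fails
```

In SCAtt-man the response additionally travels over light or sound so the user can observe that the attestation actually took place on the physical device in front of them.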
Vincent Unsel, Stephan Wiefling, Nils Gruschka, L. Lo Iacono
Online services have difficulty replacing passwords with more secure user authentication mechanisms, such as Two-Factor Authentication (2FA). This is partly because users tend to reject such mechanisms in use cases outside of online banking. Relying on password authentication alone, however, is not an option in light of recent attack patterns such as credential stuffing. Risk-Based Authentication (RBA) can serve as an interim solution to increase password-based account security until better methods are in place. Unfortunately, RBA is currently used by only a few major online services, even though it is recommended by various standards and has been shown to be effective in scientific studies. This paper supports the hypothesis that the low adoption of RBA in practice is due to the complexity of implementing it. We provide an RBA implementation for the open-source cloud management software OpenStack, which is the first fully functional open-source RBA implementation based on the Freeman et al. algorithm, along with initial reference tests that can serve as a guiding example and blueprint for developers.
Vincent Unsel, Stephan Wiefling, Nils Gruschka, and L. Lo Iacono. "Risk-Based Authentication for OpenStack: A Fully Functional Implementation and Guiding Example." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3583634
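For readers unfamiliar with the Freeman et al. approach, the risk score is essentially a likelihood ratio between an attacker model and the user's login history. The class below is a heavily simplified sketch under assumed feature and smoothing choices, not the OpenStack implementation described in the paper: attacker probabilities are taken as uniform over the values seen so far, and per-feature ratios are multiplied as if the features were independent.

```python
from collections import Counter

class SimpleRBA:
    """Simplified likelihood-ratio risk scoring in the spirit of
    Freeman et al. (hypothetical feature set and smoothing)."""

    def __init__(self, features):
        self.features = features
        self.history = {f: Counter() for f in features}   # per-user counts
        self.global_values = {f: set() for f in features}  # values seen anywhere

    def record_login(self, login):
        for f in self.features:
            self.history[f][login[f]] += 1
            self.global_values[f].add(login[f])

    def risk(self, login, smoothing=1.0):
        """Risk = product over features of p(value|attacker) / p(value|user)."""
        score = 1.0
        for f in self.features:
            seen = self.history[f]
            total = sum(seen.values())
            vocab = len(self.global_values[f] | {login[f]})
            p_user = (seen[login[f]] + smoothing) / (total + smoothing * vocab)
            p_attacker = 1.0 / vocab      # attacker picks any seen value
            score *= p_attacker / p_user
        return score

rba = SimpleRBA(["country", "browser"])
for _ in range(20):
    rba.record_login({"country": "NO", "browser": "Firefox"})
familiar = rba.risk({"country": "NO", "browser": "Firefox"})
unfamiliar = rba.risk({"country": "XX", "browser": "curl"})
print(familiar < unfamiliar)   # an unseen login context scores riskier
```

A deployment would compare the score against thresholds to decide whether to let the login pass, ask for re-authentication, or block it, which is the medium/high risk escalation RBA schemes typically use.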
M. L. Rahman, Daniel Timko, H. Wali, Ajaya Neupane
Phishing text messages, referred to as smishing (SMS + phishing), are a type of social engineering attack in which fake text messages are crafted to lure users into responding. These messages aim to obtain user credentials, install malware on users' phones, or launch further smishing attacks. They ask users to reply to the message, click on a URL that redirects them to a phishing website, or call the provided number. Drawing inspiration from the works of Tu et al. on robocalls and Tischer et al. on USB drives, this paper investigates why smishing works. Accordingly, we designed smishing experiments and sent phishing SMS messages to 265 users to measure the efficacy of smishing attacks. We sent eight fake text messages to participants and recorded their CLICK, REPLY, and CALL responses along with their feedback in a post-test survey. Our results reveal that 16.92% of our participants potentially fell for our smishing attack. To test repeat phishing, we subjected a set of randomly selected participants to a second round of smishing attacks with a different message than the one they received in the first round, and observed that 12.82% potentially fell for the attack again. Using logistic regression, we observed that a combination of user REPLY and CLICK actions increased the odds that a user would respond to our smishing message compared to CLICK alone. Additionally, we found a similar statistically significant increase when comparing the Facebook and Walmart entity scenarios to our IRS baseline. Based on our results, we pinpoint the essential message attributes and demographic features that contribute to a statistically significant change in response rates to smishing attacks.
M. L. Rahman, Daniel Timko, H. Wali, and Ajaya Neupane. "Users Really Do Respond To Smishing." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3583640
Pier Paolo Tricomi, Lisa Facciolo, Giovanni Apruzzese, M. Conti
Did you know that over 70 million Dota2 players have their in-game data freely accessible? What if such data is used in malicious ways? This paper is the first to investigate this problem. Motivated by the widespread popularity of video games, we propose the first threat model for Attribute Inference Attacks (AIA) in the Dota2 context. We explain how (and why) attackers can exploit the abundant public data in the Dota2 ecosystem to infer private information about its players. Since there is no concrete evidence on the efficacy of such AIA, we empirically assess their impact in reality. By conducting an extensive survey of 500 Dota2 players spanning over 26k matches, we verify whether a correlation exists between a player's Dota2 activity and their real life. After finding such a link (p < 0.01 and ρ > 0.3), we ethically perform diverse AIA, leveraging machine learning to infer real-life attributes of the survey respondents from their publicly available in-game data. Our results show that, by applying domain expertise, some AIA can reach up to 98% precision and over 90% accuracy. This paper hence raises the alarm on a subtle but concrete threat that can potentially affect the entire competitive gaming landscape. We have alerted the developers of Dota2.
Pier Paolo Tricomi, Lisa Facciolo, Giovanni Apruzzese, and M. Conti. "Attribute Inference Attacks in Online Multiplayer Video Games: A Case Study on DOTA2." In Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, 2023. https://doi.org/10.1145/3577923.3583653