
Proceedings of the ACM Internet Measurement Conference: Latest Publications

FlashRoute
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423619
Yuchen Huang, M. Rabinovich, R. Al-Dalky
We propose a new traceroute tool, FlashRoute, for efficient large-scale topology discovery. FlashRoute reduces the time required to traceroute the entire /24 IPv4 address space by a factor of three and a half compared to the previous state of the art. Additionally, we present a new technique for measuring the hop distance to a destination using a single probe, and we uncover a bias of the influential ISI Census hitlist [18] in topology discovery.
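Measuring hop distance from a single probe typically relies on the fact that hosts initialize the IP TTL to one of a few standard values. The sketch below shows that general idea only; it is an illustration, not FlashRoute's actual algorithm.

```python
# Infer hop distance to a host from the remaining TTL observed in its
# reply packet. Hosts commonly initialize TTL to 64, 128, or 255, so the
# gap between the inferred initial TTL and the observed TTL approximates
# the number of hops traversed. Generic illustration only.

COMMON_INITIAL_TTLS = (64, 128, 255)

def hop_distance(observed_ttl: int) -> int:
    """Estimate hop count from a reply's remaining TTL."""
    # Assume the smallest common initial TTL >= the observed value.
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(hop_distance(54))   # 64 - 54 -> 10 hops
print(hop_distance(116))  # 128 - 116 -> 12 hops
```

The heuristic fails for paths longer than the gap between two standard initial TTLs, which is one reason dedicated tools go further than this.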
Citations: 1
On the Potential for Discrimination via Composition
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423641
Giridhari Venkatadri, A. Mislove
The success of platforms such as Facebook and Google has been due in no small part to features that allow advertisers to target ads in a fine-grained manner. However, these features open up the potential for discriminatory advertising when advertisers include or exclude users of protected classes---either directly or indirectly---in a discriminatory fashion. Despite the fact that advertisers are able to compose various targeting features together, the existing mitigations to discriminatory targeting have focused only on individual features; there are concerns that such composition could result in targeting that is more discriminatory than the features individually. In this paper, we first demonstrate how compositions of individual targeting features can yield discriminatory ad targeting even for Facebook's restricted targeting features for ads in special categories (meant to protect against discriminatory advertising). We then conduct the first study of the potential for discrimination that spans across three major advertising platforms (Facebook, Google, and LinkedIn), showing how the potential for discriminatory advertising is pervasive across these platforms. Our work further points to the need for more careful mitigations to address the issue of discriminatory ad targeting.
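The compositional effect the paper studies can be illustrated with toy, hypothetical data: two filters that are each only mildly correlated with a protected attribute can, when combined with AND / AND-NOT logic, select an audience far more skewed than either filter alone.

```python
# Toy illustration (hypothetical users and attributes): composing two
# individually mild targeting filters can yield an audience fully skewed
# along a protected attribute.

users = [  # (user_id, likes_A, likes_B, protected_group)
    (1, True,  False, "g1"), (2, True,  True,  "g2"),
    (3, True,  True,  "g1"), (4, False, True,  "g2"),
    (5, True,  False, "g1"), (6, False, True,  "g2"),
]

def share_g1(audience):
    """Fraction of the audience belonging to protected group g1."""
    return sum(1 for u in audience if u[3] == "g1") / len(audience)

likes_A = [u for u in users if u[1]]
likes_B = [u for u in users if u[2]]
composed = [u for u in likes_A if not u[2]]  # likes A AND NOT likes B

print(round(share_g1(likes_A), 2))   # 0.75
print(round(share_g1(likes_B), 2))   # 0.25
print(round(share_g1(composed), 2))  # 1.0 -- fully skewed
```

This is why per-feature mitigations are insufficient: neither filter alone is fully skewed, but their composition is.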
Citations: 5
TopoScope
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423627
Zitong Jin, Xingang Shi, Yan Yang, Xia Yin, Zhiliang Wang, Jianping Wu
Knowledge of the Internet topology and the business relationships between Autonomous Systems (ASes) is the basis for studying many aspects of the Internet. Despite the significant progress achieved by the latest inference algorithms, their results still suffer from errors on some critical links due to limited data, hindering many applications that rely on the inferred relationships. We conduct an in-depth analysis of the challenges inherent in the data, especially the limited coverage and biased concentration of the vantage points (VPs). Some of these challenges have been largely overlooked but will become more severe as the Internet continues to grow. We then develop TopoScope, a framework for accurately recovering AS relationships from such fragmentary observations. TopoScope uses ensemble learning and a Bayesian network to mitigate the observation bias originating not only from a single VP but also from the uneven distribution of available VPs. It also discovers intrinsic similarities between groups of adjacent links and infers the relationships of hidden links that are not directly observable. Compared to state-of-the-art inference algorithms, TopoScope reduces the inference error by 2.7-4 times, discovers the relationships of around 30,000 upper-layer hidden AS links, and remains more accurate and stable under more incomplete or biased observations.
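A minimal way to picture combining per-VP evidence is a majority vote over the relationship labels each vantage point would infer for a link. This is only a sketch of the ensemble intuition with hypothetical data; TopoScope's actual pipeline combines learned models with a Bayesian network rather than simple voting.

```python
from collections import Counter

# Hypothetical per-VP inferences for AS links: each vantage point labels
# a link "p2c" (provider-to-customer) or "p2p" (peer-to-peer). A majority
# vote with a confidence score sketches the ensemble idea.

observations = {
    ("AS1", "AS2"): ["p2c", "p2c", "p2p", "p2c"],
    ("AS3", "AS4"): ["p2p", "p2p"],
}

def vote(labels):
    """Return the majority label and the fraction of VPs agreeing."""
    (label, count), = Counter(labels).most_common(1)
    return label, count / len(labels)

for link, labels in observations.items():
    print(link, vote(labels))
```

A low agreement fraction flags exactly the links where a single biased VP could dominate, which is where more careful modeling pays off.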
Citations: 13
How China Detects and Blocks Shadowsocks
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423644
Alice, Bob, Carol, Jan Beznazwy, A. Houmansadr
Shadowsocks is one of the most popular circumvention tools in China. Since May 2019, there have been numerous anecdotal reports of the blocking of Shadowsocks from Chinese users. In this study, we reveal how the Great Firewall of China (GFW) detects and blocks Shadowsocks and its variants. Using measurement experiments, we find that the GFW uses the length and entropy of the first data packet in each connection to identify probable Shadowsocks traffic, then sends seven different types of active probes, in different stages, to the corresponding servers to test whether its guess is correct. We developed a prober simulator to analyze the effect of different types of probes on various Shadowsocks implementations, and used it to infer what vulnerabilities are exploited by the censor. We fingerprinted the probers and found differences relative to previous work on active probing. A network-level side channel reveals that the probers, which use thousands of IP addresses, are likely controlled by a set of centralized structures. Based on our gained understanding, we present a temporary workaround that successfully mitigates the traffic analysis attack by the GFW. We further discuss essential strategies to defend against active probing. We responsibly disclosed our findings and suggestions to Shadowsocks developers, which has led to more censorship-resistant tools.
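The first-packet classifier described above can be sketched as a length-plus-entropy check: fully encrypted protocols like Shadowsocks produce first payloads that look like uniform random bytes. The thresholds below are illustrative guesses, not the censor's actual values.

```python
import math

# Heuristic in the spirit of the GFW behavior described above: flag a
# connection as "probable Shadowsocks" if its first payload has high
# Shannon entropy and a plausible length. Thresholds are illustrative.

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of the payload (0..8)."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_random(payload: bytes, min_len=50, min_entropy=7.0) -> bool:
    return len(payload) >= min_len and shannon_entropy(payload) >= min_entropy

print(looks_random(bytes(range(200))))      # True: 200 distinct byte values
print(looks_random(b"GET / HTTP/1.1\r\n"))  # False: low entropy, short
```

The heuristic's false positives on other high-entropy traffic are one reason the GFW follows up with active probes rather than blocking on the passive signal alone.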
Citations: 16
Measuring the Emergence of Consent Management on the Web
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423647
M. Hils, Daniel W. Woods, Rainer Böhme
Privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have pushed internet firms processing personal data to obtain user consent. Uncertainty around sanctions for non-compliance led many websites to embed a Consent Management Provider (CMP), which collects users' consent and shares it with third-party vendors and other websites. Our paper maps the formation of this ecosystem using longitudinal measurements. Primary and secondary data sources are used to measure each actor within the ecosystem. Using 161 million browser crawls, we estimate that CMP adoption doubled from June 2018 to June 2019 and then doubled again until June 2020. Sampling 4.2 million unique domains, we observe that CMP adoption is most prevalent among moderately popular websites (Tranco top 50-10k) but a long tail exists. Using APIs from the ad-tech industry, we quantify the purposes and lawful bases used to justify processing personal data. A controlled experiment on a public website provides novel insights into how the time-to-complete of two leading CMPs' consent dialogues varies with the preferences expressed, showing how privacy-aware users incur a significant time cost.
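One crawl-side signal such measurements can use is the presence of the IAB Transparency and Consent Framework's JavaScript API hooks in a page. The check below is a simplified sketch of that kind of detection, not the paper's exact methodology.

```python
# Crawl-style heuristic for spotting a Consent Management Provider (CMP):
# look for the IAB TCF JavaScript API entry points ("__tcfapi" in TCF v2,
# "__cmp" in v1) in the fetched HTML. Simplified illustration only.

CMP_MARKERS = ("__tcfapi", "__cmp")

def has_cmp(html: str) -> bool:
    return any(marker in html for marker in CMP_MARKERS)

page = '<script>window.__tcfapi = function(cmd, v, cb) { /* ... */ };</script>'
print(has_cmp(page))                     # True
print(has_cmp("<html><body>hi</body>"))  # False
```

A real measurement would evaluate the API in a browser context rather than string-match static HTML, since many CMPs load asynchronously.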
Citations: 50
Investigating Large Scale HTTPS Interception in Kazakhstan
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423665
R. Raman, Leonid Evdokimov, Eric Wustrow, J. A. Halderman, Roya Ensafi
Increased adoption of HTTPS has created a largely encrypted web, but these security gains are on a collision course with governments that desire visibility into and control over user communications. Last year, the government of Kazakhstan conducted an unprecedented large-scale HTTPS interception attack by forcing users to trust a custom root certificate. We were able to detect the interception and monitor its scale and evolution using measurements from in-country vantage points and remote measurement techniques. We find that the attack targeted connections to 37 unique domains, with a focus on social media and communication services, suggesting a surveillance motive, and that it affected a large fraction of connections passing through the country's largest ISP, Kazakhtelecom. Our continuous real-time measurements indicated that the interception system was shut down after being intermittently active for 21 days. Subsequently, supported by our findings, two major browsers (Mozilla Firefox and Google Chrome) completely blocked the use of Kazakhstan's custom root. However, the incident sets a dangerous precedent, not only for Kazakhstan but for other countries that may seek to circumvent encryption online.
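Blocking a known interception root, as the browsers ultimately did, amounts to distrusting a specific certificate by fingerprint. The sketch below shows that mechanism with a hypothetical placeholder digest, not the real Kazakh root's fingerprint.

```python
import hashlib

# Detecting an interception root: compare the SHA-256 fingerprint of the
# root certificate presented in a TLS chain against a blocklist of known
# MITM roots. The fingerprint below is a hypothetical placeholder
# computed from dummy bytes, not the actual Kazakh root's digest.

BLOCKED_ROOT_FINGERPRINTS = {
    hashlib.sha256(b"example-interception-root-der-bytes").hexdigest(),
}

def chain_is_intercepted(root_der: bytes) -> bool:
    """True if the chain's root matches a blocklisted fingerprint."""
    return hashlib.sha256(root_der).hexdigest() in BLOCKED_ROOT_FINGERPRINTS

print(chain_is_intercepted(b"example-interception-root-der-bytes"))  # True
print(chain_is_intercepted(b"some-benign-root"))                     # False
```

Pinning the exact certificate, rather than its subject name, ensures a reissued look-alike root does not slip through.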
Citations: 31
The Reality of Algorithm Agility: Studying the DNSSEC Algorithm Life-Cycle
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423638
M. Müller, W. Toorop, Taejoong Chung, J. Jansen, R. V. Rijswijk-Deij
The DNS Security Extensions (DNSSEC) add data origin authentication and data integrity to the Domain Name System (DNS), the naming system of the Internet. With DNSSEC, signatures are added to the information provided in the DNS using public key cryptography. Advances in both cryptography and cryptanalysis make it necessary to deploy new algorithms in DNSSEC, as well as deprecate those with weakened security. If this process is easy, then the protocol has achieved what the IETF terms "algorithm agility". In this paper, we study the lifetime of algorithms for DNSSEC. This includes: (i) standardizing the algorithm, (ii) implementing support in DNS software, (iii) deploying new algorithms at domains and recursive resolvers, and (iv) replacing deprecated algorithms. Using data from more than 6.7 million signed domains and over 10,000 vantage points in the DNS, combined with qualitative studies, we show that DNSSEC has only partially achieved algorithm agility. Standardizing new algorithms and deprecating insecure ones can take years. We highlight the main barriers for getting new algorithms deployed, but also discuss success factors. This study provides key insights to take into account when new algorithms are introduced, for example when the Internet must transition to quantum-safe public key cryptography.
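Auditing which algorithms a zone uses comes down to mapping the IANA DNSKEY algorithm numbers found in its records to names and deployment guidance. The status labels below loosely follow common guidance (e.g. RFC 8624) and should be treated as illustrative, not as an authoritative registry.

```python
# Map IANA DNSKEY algorithm numbers to names and an illustrative status
# label, so deprecated algorithms in a zone can be flagged.

DNSKEY_ALGORITHMS = {
    1:  ("RSAMD5",             "must not use"),
    3:  ("DSA",                "must not use"),
    5:  ("RSASHA1",            "deprecated"),
    7:  ("RSASHA1-NSEC3-SHA1", "deprecated"),
    8:  ("RSASHA256",          "recommended"),
    10: ("RSASHA512",          "not recommended for new zones"),
    13: ("ECDSAP256SHA256",    "recommended"),
    14: ("ECDSAP384SHA384",    "recommended"),
    15: ("ED25519",            "recommended"),
    16: ("ED448",              "optional"),
}

def check_zone_algorithms(algo_numbers):
    """Return (name, status) for each DNSKEY algorithm number seen."""
    return [DNSKEY_ALGORITHMS.get(n, (f"unknown({n})", "unknown"))
            for n in algo_numbers]

print(check_zone_algorithms([5, 8, 13]))
```

A zone still signing with algorithm 5 alongside 8 or 13 is typically mid-rollover, which is exactly the life-cycle stage the paper measures.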
Citations: 10
MAnycast2
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423646
Raffaele Sommese, L. Bertholdo, Gautam Akiwate, M. Jonker, Roland van Rijswijk-Deij, A. Dainotti, K. Claffy, A. Sperotto
Anycast addressing - assigning the same IP address to multiple, distributed devices - has become a fundamental approach to improving the resilience and performance of Internet services, but its conventional deployment model makes it impossible to infer from the address itself that it is anycast. Existing methods to detect anycast IPv4 prefixes present accuracy challenges stemming from routing and latency dynamics, and efficiency and scalability challenges related to measurement load. We review these challenges and introduce a new technique we call "MAnycast2" that can help overcome them. Our technique uses a distributed measurement platform of anycast vantage points as sources to probe potential anycast destinations. This approach eliminates any sensitivity to latency dynamics, and greatly improves efficiency and scalability. We discuss alternatives to overcome remaining challenges relating to routing dynamics, suggesting a path toward establishing the capability to complete, in under 3 hours, a full census of which IPv4 prefixes in the ISI hitlist are anycast.
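The core intuition can be condensed as follows: when probing from an anycast platform, BGP routes each target's reply to the platform site nearest that target, so a unicast target's replies land at one site, while an anycast target (whose instances sit near different sites) yields replies captured at several sites. A toy classification over hypothetical capture data:

```python
# Classify targets as anycast or unicast from which of our anycast
# platform's sites captured their replies. Hypothetical data; the real
# MAnycast2 pipeline handles routing dynamics beyond this sketch.

replies_by_target = {
    "192.0.2.1":    {"site-ams"},                          # one site
    "198.51.100.1": {"site-ams", "site-lax", "site-syd"},  # several sites
}

def classify(capturing_sites) -> str:
    """More than one capturing site suggests an anycast target."""
    return "anycast" if len(capturing_sites) > 1 else "unicast"

for target, sites in sorted(replies_by_target.items()):
    print(target, classify(sites))
```

Because the signal is which site captured the reply rather than latency, the method avoids the latency-dynamics pitfalls of earlier techniques.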
Citations: 17
On the Origin of Scanning: The Impact of Location on Internet-Wide Scans
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3424214
Gerry Wan, Liz Izhikevich, David Adrian, K. Yoshioka, Ralph Holz, C. Rossow, Z. Durumeric
Fast IPv4 scanning has enabled researchers to answer a wealth of security and networking questions. Yet, despite widespread use, there has been little validation of the methodology's accuracy, including whether a single scan provides sufficient coverage. In this paper, we analyze how scan origin affects the results of Internet-wide scans by completing three HTTP, HTTPS, and SSH scans from seven geographically and topologically diverse networks. We find that individual origins miss an average of 1.6-8.4% of HTTP, 1.5-4.6% of HTTPS, and 8.3-18.2% of SSH hosts. We analyze why origins see different hosts, and show how permanent and temporary blocking, packet loss, geographic biases, and transient outages affect scan results. We discuss the implications for scanning and provide recommendations for future studies.
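The per-origin miss rates above fall out of a simple comparison: each origin's responsive-host set versus the union over all origins. A sketch with hypothetical scan results:

```python
# Compute each scan origin's miss rate against the union of responsive
# hosts seen from all origins. Host sets are hypothetical.

scans = {
    "origin-eu": {"h1", "h2", "h4", "h5"},
    "origin-jp": {"h2", "h3", "h4", "h5"},
    "origin-us": {"h1", "h2", "h3", "h4"},
}

all_hosts = set().union(*scans.values())  # every host any origin reached

for origin, hosts in sorted(scans.items()):
    miss_rate = 1 - len(hosts) / len(all_hosts)
    print(f"{origin}: missed {miss_rate:.0%} of {len(all_hosts)} hosts")
```

The union is itself only a lower bound on the responsive population, since hosts unreachable from every origin stay invisible.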
Citations: 30
Are You Human?: Resilience of Phishing Detection to Evasion Techniques Based on Human Verification
Pub Date : 2020-10-27 DOI: 10.1145/3419394.3423632
S. Maroofi, Maciej Korczyński, A. Duda
Phishing is one of the most common cyberattacks these days. Attackers constantly look for new techniques to make their campaigns more lucrative by extending the lifespan of phishing pages. To achieve this goal, they leverage different anti-analysis (i.e., evasion) techniques to conceal the malicious content from anti-phishing bots and only reveal the payload to potential victims. In this paper, we study the resilience of anti-phishing entities to three advanced anti-analysis techniques based on human verification: Google re-CAPTCHA, alert box, and session-based evasion. We have designed a framework for performing our testing experiments, deployed 105 phishing websites, and provided each of them with one of the three evasion techniques. In the experiments, we report phishing URLs to major server-side anti-phishing entities (e.g., Google Safe Browsing, NetCraft, APWG) and monitor their occurrence in the blacklists. Our results show that Google Safe Browsing was the only engine that detected all the reported URLs protected by alert boxes. However, none of the anti-phishing engines could detect phishing URLs armed with Google re-CAPTCHA, making it so far the most effective protection of phishing content available to malicious actors. Our experiments show that all the major server-side anti-phishing bots only detected 8 out of 105 phishing websites protected by human verification systems. As a mitigation plan, we intend to disclose our findings to the impacted anti-phishing entities before phishers exploit human verification techniques on a massive scale.
{"title":"Are You Human?: Resilience of Phishing Detection to Evasion Techniques Based on Human Verification","authors":"S. Maroofi, Maciej Korczyński, A. Duda","doi":"10.1145/3419394.3423632","DOIUrl":"https://doi.org/10.1145/3419394.3423632","url":null,"abstract":"Phishing is one of the most common cyberattacks these days. Attackers constantly look for new techniques to make their campaigns more lucrative by extending the lifespan of phishing pages. To achieve this goal, they leverage different anti-analysis (i.e., evasion) techniques to conceal the malicious content from anti-phishing bots and only reveal the payload to potential victims. In this paper, we study the resilience of anti-phishing entities to three advanced anti-analysis techniques based on human verification: Google re-CAPTCHA, alert box, and session-based evasion. We have designed a framework for performing our testing experiments, deployed 105 phishing websites, and provided each of them with one of the three evasion techniques. In the experiments, we report phishing URLs to major server-side anti-phishing entities (e.g., Google Safe Browsing, NetCraft, APWG) and monitor their occurrence in the blacklists. Our results show that Google Safe Browsing was the only engine that detected all the reported URLs protected by alert boxes. However, none of the anti-phishing engines could detect phishing URLs armed with Google re-CAPTCHA, making it so far the most effective protection solution of phishing content available to malicious actors. Our experiments show that all the major serverside anti-phishing bots only detected 8 out of 105 phishing websites protected by human verification systems. As a mitigation plan, we intend to disclose our findings to the impacted anti-phishing entities before phishers exploit human verification techniques on a massive scale.","PeriodicalId":255324,"journal":{"name":"Proceedings of the ACM Internet Measurement Conference","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126264382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
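The abstract's "session-based evasion" can be pictured as a server-side gate: the payload is served only to sessions that previously completed a human-verification step, so a blacklist crawler that fetches the URL directly never observes the malicious content. A conceptual sketch of that gating logic (all names hypothetical, content strings are placeholders):

```python
# Session-based cloaking, as studied in the paper: the server reveals the
# payload only after a human-verification step has marked the session,
# which is why one-shot automated crawlers see only benign content.
BENIGN_PAGE = "<html>Under construction</html>"
PAYLOAD_PAGE = "<html>credential-harvesting form</html>"

verified_sessions = set()  # session tokens that passed verification

def complete_verification(token):
    """Record that a visitor's session passed the verification challenge."""
    verified_sessions.add(token)

def serve(session_token):
    """Return the payload only to previously verified sessions."""
    if session_token in verified_sessions:
        return PAYLOAD_PAGE
    return BENIGN_PAGE

# A crawler that fetches the URL without completing verification:
assert serve("crawler-session") == BENIGN_PAGE
# A human visitor who solved the challenge first:
complete_verification("victim-session")
assert serve("victim-session") == PAYLOAD_PAGE
```

This illustrates the measurement problem the paper reports: an anti-phishing bot that cannot complete the verification step classifies the page from the benign response alone.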