
Latest publications from the Proceedings of the Internet Measurement Conference 2018

Cache Me If You Can: Effects of DNS Time-to-Live
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355568
G. Moura, J. Heidemann, R. Schmidt, W. Hardaker
DNS depends on extensive caching for good performance, and every DNS zone owner must set Time-to-Live (TTL) values to control their DNS caching. Today there is relatively little guidance backed by research about how to set TTLs, and operators must balance conflicting demands of caching against agility of configuration. Exactly how TTL value choices affect operational networks is quite challenging to understand due to interactions across the distributed DNS service, where resolvers receive TTLs in different ways (answers and hints), TTLs are specified in multiple places (zones and their parent's glue), and DNS resolution must be security-aware. This paper provides the first careful evaluation of how these multiple, interacting factors affect the effective cache lifetimes of DNS records, and provides recommendations for how to configure DNS TTLs based on our findings. We provide recommendations on TTL choice for different situations, and for where they must be configured. We show that longer TTLs have significant promise in reducing latency, reducing it from 183 ms to 28.7 ms for one country-code TLD.
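The trade-off this abstract studies hinges on how resolvers expire cached records after their TTL. A minimal sketch of that behavior (domain names and addresses are hypothetical; this is not the authors' measurement tooling):

```python
import time

class TTLCache:
    """Toy DNS-style cache: each entry expires after its record's TTL."""

    def __init__(self):
        self._store = {}  # name -> (value, expiry timestamp)

    def put(self, name, value, ttl):
        # A resolver stores the answer together with its expiry time.
        self._store[name] = (value, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None  # cache miss: a resolver would query upstream
        value, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[name]  # TTL elapsed: forces a fresh lookup
            return None
        return value

cache = TTLCache()
cache.put("example.nl", "192.0.2.1", 60)   # long TTL: served from cache
print(cache.get("example.nl"))
```

A longer TTL keeps answers local (low latency, as in the paper's 183 ms to 28.7 ms result) but delays the visibility of configuration changes, which is exactly the agility cost the abstract describes.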
Citations: 46
Profiling BGP Serial Hijackers: Capturing Persistent Misbehavior in the Global Routing Table
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355581
Cecilia Testart, P. Richter, Alistair King, A. Dainotti, D. Clark
BGP hijacks remain an acute problem in today's Internet, with widespread consequences. While hijack detection systems are readily available, they typically rely on a priori prefix-ownership information and are reactive in nature. In this work, we take on a new perspective on BGP hijacking activity: we introduce and track the long-term routing behavior of serial hijackers, networks that repeatedly hijack address blocks for malicious purposes, often over the course of many months or even years. Based on a ground truth dataset that we construct by extracting information from network operator mailing lists, we illuminate the dominant routing characteristics of serial hijackers, and how they differ from legitimate networks. We then distill features that can capture these behavioral differences and train a machine learning model to automatically identify Autonomous Systems (ASes) that exhibit characteristics similar to serial hijackers. Our classifier identifies ≈ 900 ASes with similar behavior in the global IPv4 routing table. We analyze and categorize these networks, finding a wide range of indicators of malicious activity, misconfiguration, as well as benign hijacking activity. Our work presents a solid first step towards identifying and understanding this important category of networks, which can aid network operators in taking proactive measures to defend themselves against prefix hijacking and serve as input for current and future detection systems.
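The classification idea can be sketched as a simple rule over per-AS behavioral features. The feature names and thresholds below are illustrative stand-ins, not the paper's trained model or exact feature set:

```python
def flag_suspicious_ases(as_stats, min_prefixes=10, max_median_days=30):
    """Flag ASes whose announcements are both numerous and short-lived --
    two behavioral traits the paper associates with serial hijackers.
    as_stats maps an AS name to hypothetical per-AS features."""
    flagged = []
    for asn, stats in as_stats.items():
        many_prefixes = stats["distinct_prefixes"] >= min_prefixes
        short_lived = stats["median_origination_days"] <= max_median_days
        if many_prefixes and short_lived:
            flagged.append(asn)
    return sorted(flagged)

as_stats = {
    "AS65001": {"distinct_prefixes": 40, "median_origination_days": 5},
    "AS65002": {"distinct_prefixes": 3, "median_origination_days": 2},
    "AS65003": {"distinct_prefixes": 120, "median_origination_days": 400},
}
print(flag_suspicious_ases(as_stats))  # only AS65001 matches both traits
```

The paper instead trains a machine learning model on ground truth extracted from operator mailing lists; this sketch only conveys the shape of the feature-based decision.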
Citations: 42
Scanning the Scanners: Sensing the Internet from a Massively Distributed Network Telescope
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355595
P. Richter, A. Berger
Scanning of hosts on the Internet to identify vulnerable devices and services is a key component in many of today's cyberattacks. Tracking this scanning activity, in turn, provides an excellent signal to assess the current state of affairs for many vulnerabilities and their exploitation. So far, studies tracking scanning activity have relied on unsolicited traffic captured in darknets, focusing on random scans of the address space. In this work, we track scanning activity through the lens of unsolicited traffic captured at the firewalls of some 89,000 hosts of a major Content Distribution Network (CDN). Our vantage point has two distinguishing features compared to darknets: (i) it is distributed across some 1,300 networks, and (ii) its servers are live, offering services and thus emitting traffic. While all servers receive a baseline level of probing from Internet-wide scans, i.e., scans targeting random subsets of the IPv4 space or the entire space, we show that some 30% of all logged scan traffic is the result of localized scans. We find that localized scanning campaigns often target narrow regions in the address space, and that their characteristics in terms of target selection strategy and scanned services differ vastly from the more widely known Internet-wide scans. Our observations imply that conventional darknets can only partially illuminate scanning activity, and may severely underestimate widespread attempts to scan and exploit individual services in specific prefixes or networks. Our methods can be adapted for individual network operators to assess if they are subjected to targeted scanning activity.
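The distinction between Internet-wide and localized scans comes down to how widely a scanner's targets are spread over the address space. A crude, hypothetical heuristic (the /8 granularity and threshold are illustrative, not the paper's methodology):

```python
import ipaddress

def classify_scan(target_ips, wide_threshold=0.5):
    """Classify a scan by target spread: Internet-wide scans touch targets
    across the IPv4 space, localized scans concentrate in a few prefixes.
    Here the spread measure is the fraction of distinct /8s hit."""
    slash8s = {int(ipaddress.IPv4Address(ip)) >> 24 for ip in target_ips}
    spread = len(slash8s) / 256
    return "internet-wide" if spread >= wide_threshold else "localized"

# A campaign confined to one /8 vs. one spraying targets everywhere:
print(classify_scan(["10.0.0.1", "10.0.1.2", "10.2.3.4"]))
print(classify_scan([f"{i}.0.0.1" for i in range(1, 201)]))
```

A real analysis at CDN scale would also weigh scanned ports and services, which the paper finds differ vastly between the two scan types.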
Citations: 46
How Cloud Traffic Goes Hiding: A Study of Amazon's Peering Fabric
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355602
B. Yeganeh, Ramakrishnan Durairajan, R. Rejaie, W. Willinger
The growing demand for an ever-increasing number of cloud services is profoundly transforming the Internet's interconnection or peering ecosystem, and one example is the emergence of "virtual private interconnections (VPIs)". However, due to the underlying technologies, these VPIs are not publicly visible and traffic traversing them remains largely hidden as it bypasses the public Internet. In particular, existing techniques for inferring Internet interconnections are unable to detect these VPIs and are also incapable of mapping them to the physical facility or geographic region where they are established. In this paper, we present a third-party measurement study aimed at revealing all the peerings between Amazon and the rest of the Internet. We describe our technique for inferring these peering links and pay special attention to inferring the VPIs associated with this largest cloud provider. We also present and evaluate a new method for pinning (i.e., geo-locating) each end of the inferred interconnections or peering links. Our study provides a first look at Amazon's peering fabric. In particular, by grouping Amazon's peerings based on their key features, we illustrate the specific role that each group plays in how Amazon peers with other networks.
Citations: 26
Tales from the Porn: A Comprehensive Privacy Analysis of the Web Porn Ecosystem
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355583
Pelayo Vallina, Álvaro Feal, Julien Gamba, N. Vallina-Rodriguez, A. Fernández
Modern privacy regulations, including the General Data Protection Regulation (GDPR) in the European Union, aim to control user tracking activities in websites and mobile applications. These privacy rules typically contain specific provisions and strict requirements for websites that provide sensitive material to end users such as sexual, religious, and health services. However, little is known about the privacy risks that users face when visiting such websites, and about their regulatory compliance. In this paper, we present the first comprehensive and large-scale analysis of 6,843 pornographic websites. We provide an exhaustive behavioral analysis of the use of tracking methods by these websites, and their lack of regulatory compliance, including the absence of age-verification mechanisms and methods to obtain informed user consent. The results indicate that, as in the regular web, tracking is prevalent across pornographic sites: 72% of the websites use third-party cookies and 5% leverage advanced user fingerprinting technologies. Yet, our analysis reveals a third-party tracking ecosystem semi-decoupled from the regular web in which various analytics and advertising services track users across, and outside, pornographic websites. We complete the paper with a regulatory compliance analysis in the context of the EU GDPR, and newer legal requirements to implement verifiable access control mechanisms (e.g., UK's Digital Economy Act). We find that only 16% of the analyzed websites have an accessible privacy policy and only 4% provide a cookie consent banner. The use of verifiable access control mechanisms is limited to prominent pornographic websites.
Citations: 32
Information Exposure From Consumer IoT Devices: A Multidimensional, Network-Informed Measurement Approach
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355577
Jingjing Ren, Daniel J. Dubois, D. Choffnes, A. Mandalari, Roman Kolcun, H. Haddadi
Internet of Things (IoT) devices are increasingly found in everyday homes, providing useful functionality for devices such as TVs, smart speakers, and video doorbells. Along with their benefits come potential privacy risks, since these devices can communicate information about their users to other parties over the Internet. However, understanding these risks in depth and at scale is difficult due to heterogeneity in devices' user interfaces, protocols, and functionality. In this work, we conduct a multidimensional analysis of information exposure from 81 devices located in labs in the US and UK. Through a total of 34,586 rigorous automated and manual controlled experiments, we characterize information exposure in terms of destinations of Internet traffic, whether the contents of communication are protected by encryption, what are the IoT-device interactions that can be inferred from such content, and whether there are unexpected exposures of private and/or sensitive information (e.g., video surreptitiously transmitted by a recording device). We highlight regional differences between these results, potentially due to different privacy regulations in the US and UK. Last, we compare our controlled experiments with data gathered from an in situ user study comprising 36 participants.
Citations: 192
DNS Observatory: The Big Picture of the DNS
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355566
Pawel Foremski, Oliver Gasser, G. Moura
The Domain Name System (DNS) is thought of as having the simple-sounding task of resolving domains into IP addresses. With its stub resolvers, different layers of recursive resolvers, authoritative nameservers, a multitude of query types, and DNSSEC, the DNS ecosystem is actually quite complex. In this paper, we introduce DNS Observatory: a new stream analytics platform that provides a bird's-eye view on the DNS. As the data source, we leverage a large stream of passive DNS observations produced by hundreds of globally distributed probes, acquiring a peak of 200 k DNS queries per second between recursive resolvers and authoritative nameservers. For each observed DNS transaction, we extract traffic features, aggregate them, and track the top-k DNS objects, e.g., the top authoritative nameserver IP addresses or the top domains. We analyze 1.6 trillion DNS transactions over a four-month period. This allows us to characterize DNS deployments and traffic patterns, evaluate its associated infrastructure and performance, as well as gain insight into the modern additions to the DNS and related Internet protocols. We find an alarming concentration of DNS traffic: roughly half of the observed traffic is handled by only 1 k authoritative nameservers and by 10 AS operators. By evaluating the median delay of DNS queries, we find that the top 10 k nameservers have indeed a shorter response time than less popular nameservers, which is correlated with fewer router hops. We also study how DNS TTL adjustments can impact query volumes, anticipate upcoming changes to DNS infrastructure, and how negative caching TTLs affect the Happy Eyeballs algorithm. We find some popular domains with a share of up to 90 % of empty DNS responses due to short negative caching TTLs. We propose actionable measures to improve uncovered DNS shortcomings.
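The top-k aggregation this abstract describes can be sketched with an exact counter over a stream of transactions. Names are hypothetical, and a real deployment at 200 k queries per second would use a bounded-memory sketch such as Space-Saving rather than an exact count:

```python
from collections import Counter

def top_k_nameservers(stream, k):
    """Aggregate a stream of (nameserver, query_name) DNS transactions
    and report the k most frequently observed nameservers -- the same
    kind of top-k view the platform maintains per traffic feature."""
    counts = Counter(nameserver for nameserver, _ in stream)
    return counts.most_common(k)

stream = [
    ("ns1.example", "a.example"),
    ("ns1.example", "b.example"),
    ("ns2.example", "c.example"),
]
print(top_k_nameservers(stream, 1))  # ns1.example handled 2 of 3 queries
```

Tracking only the top-k objects is what makes a concentration finding like "half of traffic goes to 1 k nameservers" computable over 1.6 trillion transactions without storing every key.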
Citations: 25
Booting the Booters: Evaluating the Effects of Police Interventions in the Market for Denial-of-Service Attacks
Pub Date : 2019-10-21 DOI: 10.1145/3355369.3355592
Ben Collier, Daniel R. Thomas, R. Clayton, Alice Hutchings
Illegal booter services offer denial of service (DoS) attacks for a fee of a few tens of dollars a month. Internationally, police have implemented a range of different types of intervention aimed at those using and offering booter services, including arrests and website takedown. In order to measure the impact of these interventions we look at the usage reports that booters themselves provide and at measurements of reflected UDP DoS attacks, leveraging a five-year measurement dataset that has been statistically demonstrated to have very high coverage. We analysed time series data (using a negative binomial regression model) to show that several interventions have had a statistically significant impact on the number of attacks. We show that, while there is no consistent effect of highly-publicised court cases, takedowns of individual booters precede significant, but short-lived, reductions in recorded attack numbers. However, more wide-ranging disruptions have much longer effects. The closure of HackForums' booter market reduced attacks for 13 weeks globally (and for longer in particular countries) and the FBI's coordinated operation in December 2018, which involved both takedowns and arrests, reduced attacks by a third for at least 10 weeks and resulted in lasting change to the structure of the booter market.
Citations: 36
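The intervention analysis above fits a negative binomial regression to time series of attack counts. As a rough illustration of the core quantity such a model estimates, the sketch below (with synthetic counts, not the paper's data) computes the incidence rate ratio (IRR) of attacks after vs. before a hypothetical takedown; in a simple two-group design, the log of this ratio corresponds to the regression coefficient on a post-intervention dummy.

```python
import math
from statistics import mean

def incidence_rate_ratio(pre_counts, post_counts):
    """Ratio of mean attack counts after vs. before an intervention.

    In a negative binomial regression with just an intercept and a
    post-intervention dummy, the fitted exp(beta) for the dummy equals
    this ratio of group means; beta itself is its log.
    """
    irr = mean(post_counts) / mean(pre_counts)
    return irr, math.log(irr)

# Synthetic weekly reflected-DoS attack counts around a hypothetical takedown
pre = [900, 950, 1010, 980, 940, 1000]
post = [610, 640, 600, 660, 630, 650]

irr, beta = incidence_rate_ratio(pre, post)
print(f"IRR = {irr:.2f} (attacks changed by {100 * (irr - 1):.0f}%)")
# → IRR = 0.66 (attacks changed by -34%)
```

A full replication would also model trend, seasonality, and overdispersion, as the paper does; this sketch only isolates the before/after rate change.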
Errors, Misunderstandings, and Attacks: Analyzing the Crowdsourcing Process of Ad-blocking Systems
Pub Date: 2019-10-21 | DOI: 10.1145/3355369.3355588
Mshabab Alrizah, Sencun Zhu, Xinyu Xing, Gang Wang
Ad-blocking systems such as Adblock Plus rely on crowdsourcing to build and maintain filter lists, which are the basis for determining which ads to block on web pages. In this work, we seek to advance our understanding of the ad-blocking community as well as the errors and pitfalls of the crowdsourcing process. To do so, we collected and analyzed a longitudinal dataset covering nine years of dynamic changes to the popular filter list EasyList, together with the error reports submitted by the crowd over the same period. Our study yielded a number of significant findings regarding the characteristics of false positive (FP) and false negative (FN) errors and their causes. For instance, we found that false positive errors (i.e., incorrectly blocking legitimate content) still took a long time to be discovered (50% of them took more than a month) despite the community's effort. Both EasyList editors and website owners were to blame for the false positives. In addition, we found that a great number of false negative errors (i.e., failing to block real advertisements) were either incorrectly reported or simply ignored by the editors. Furthermore, we analyzed evasion attacks from ad publishers against ad-blockers. In total, our analysis covers 15 types of attack methods, including 8 that have not been studied by the research community. We show how ad publishers have utilized them to circumvent ad-blockers and empirically measure the reactions of ad-blockers. Through in-depth analysis, our findings are expected to shed light on future work to evolve ad blocking and optimize crowdsourcing mechanisms.
Citations: 29
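EasyList rules are written in Adblock filter syntax. The sketch below compiles a deliberately tiny subset of that syntax (the `||` host anchor, `^` separator, and `*` wildcard; the domains are made up for illustration) to show how matching works, how an over-broad path rule blocks legitimate content (a false positive), and how an unlisted ad host slips through (a false negative).

```python
import re

def rule_to_regex(rule):
    """Compile a tiny subset of Adblock-style filter syntax to a regex.

    Supported here (a deliberate simplification of full EasyList syntax):
      ||domain  -> match the domain at a host boundary
      *         -> wildcard
      ^         -> separator (end of URL, '/', '?', or ':')
    """
    pattern = re.escape(rule)
    pattern = pattern.replace(r"\|\|", r"^https?://([^/]+\.)?")
    pattern = pattern.replace(r"\*", ".*")
    pattern = pattern.replace(r"\^", r"($|[/?:])")
    return re.compile(pattern)

def is_blocked(url, rules):
    return any(rule_to_regex(r).search(url) for r in rules)

# Toy filter list (hypothetical domains, not real EasyList entries)
rules = ["||ads.example^", "/banner/*"]

print(is_blocked("https://ads.example/track.js", rules))       # True: host anchor matches
print(is_blocked("https://cdn.example/banner/img.png", rules)) # True: over-broad path rule (a false positive)
print(is_blocked("https://newads.example/ad.js", rules))       # False: unlisted host (a false negative)
```

Real ad-blockers additionally support exception rules, options, and element hiding, which is exactly where the crowdsourced maintenance burden studied above comes from.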
Modeling BBR's Interactions with Loss-Based Congestion Control
Pub Date: 2019-10-21 | DOI: 10.1145/3355369.3355604
Ranysha Ware, Matthew K. Mukerjee, S. Seshan, Justine Sherry
BBR is a new congestion control algorithm (CCA) deployed in Chromium QUIC and the Linux kernel. As the default CCA for YouTube (which commands 11+% of Internet traffic), BBR has rapidly become a major player in Internet congestion control. BBR's fairness or friendliness to other connections has recently come under scrutiny, as measurements from multiple research groups have shown undesirable outcomes when BBR competes with traditional CCAs. One such outcome is a fixed 40% proportion of link capacity consumed by a single BBR flow when competing with as many as 16 flows using loss-based algorithms like Cubic or Reno. In this short paper, we provide the first model capturing BBR's behavior in competition with loss-based CCAs. Our model is coupled with practical experiments to validate its implications. The key lesson is this: under competition, BBR becomes window-limited by its 'in-flight cap', which then determines BBR's bandwidth consumption. By modeling the value of BBR's in-flight cap under varying network conditions, we can predict BBR's throughput when competing against Cubic flows with a median error of 5%, and against Reno with a median error of 8%.
Citations: 52
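The in-flight-cap mechanism can be illustrated with a back-of-the-envelope calculation (a simplification for intuition, not the paper's fitted model): a flow that is window-limited by its in-flight cap sends roughly cap/RTT, so its share of the bottleneck is that rate divided by link capacity. All numbers below are hypothetical.

```python
def bbr_share(inflight_cap_bytes, rtt_s, link_bps):
    """Sketch of the window-limited regime: a flow capped at
    inflight_cap_bytes in flight sends about cap/RTT, so its share
    of the bottleneck is that rate over capacity, clamped at 1."""
    rate_bps = inflight_cap_bytes * 8 / rtt_s
    return min(1.0, rate_bps / link_bps)

# Hypothetical numbers: 200 KB in-flight cap, 40 ms RTT, 100 Mbit/s link
share = bbr_share(200_000, 0.040, 100e6)
print(f"BBR consumes about {share:.0%} of the link")
# → BBR consumes about 40% of the link
```

With these made-up numbers the cap pins BBR at 40% of the link regardless of how many loss-based flows it competes with, which is the shape of the fixed-share outcome described above; the paper's actual model additionally accounts for how BBR's bandwidth and RTT measurements set the cap under competition.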
Proceedings of the Internet Measurement Conference 2018