
Latest publications in 软件产业与工程 (Software Industry and Engineering)

CORMS: a GitHub and Gerrit based hybrid code reviewer recommendation approach for modern code review
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549115
Prahar Pandya, Saurabh Tiwari
Modern Code Review (MCR) techniques are widely adopted in both open-source software platforms and organizations to ensure the quality of their software products. However, selecting reviewers for code review becomes cumbersome as development teams grow, and recommending inappropriate reviewers costs additional time and effort before a review can be completed effectively. In this paper, we extended the baseline reviewer recommendation framework, RevFinder, to handle issues with newly created files, retired reviewers, the external validity of results, and the accuracy of the state-of-the-art RevFinder. Our proposed hybrid approach, CORMS, performs similarity analysis over file paths, projects/sub-projects, and author information, and uses prediction models to recommend reviewers based on the subject of the change. We conducted a detailed analysis of 20 widely used projects from both Gerrit and GitHub to compare our results with RevFinder. Our results reveal that, on average, CORMS achieves top-1, top-3, top-5, and top-10 accuracies and a Mean Reciprocal Rank (MRR) of 45.1%, 67.5%, 74.6%, 79.9%, and 0.58 across the 20 projects, improving on RevFinder by 44.9%, 34.4%, 20.8%, 12.3%, and 18.4%, respectively.
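The file-path similarity idea that CORMS inherits from RevFinder can be illustrated with a minimal sketch. The function names, the prefix-based similarity metric, and the toy review history below are our own illustration, not the paper's actual algorithm:

```python
def path_similarity(p1: str, p2: str) -> float:
    """Longest common prefix of path components, normalized by the
    longer path's length -- a RevFinder-style string heuristic."""
    a, b = p1.split("/"), p2.split("/")
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return common / max(len(a), len(b))

def recommend(new_file: str, history: dict, k: int = 3) -> list:
    """Rank past reviewers by the similarity of the files they
    previously reviewed to the newly changed file."""
    scores = {}
    for reviewer, files in history.items():
        scores[reviewer] = max(path_similarity(new_file, f) for f in files)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical review history for two reviewers.
history = {
    "alice": ["src/net/http/client.py", "src/net/http/server.py"],
    "bob": ["docs/readme.md"],
}
print(recommend("src/net/http/util.py", history))  # alice first
```

CORMS additionally blends in project/sub-project and author similarity plus a prediction model over the change subject; this sketch shows only the path-similarity ingredient.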
Citations: 6
TraceCRL: contrastive representation learning for microservice trace analysis
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549146
Chenxi Zhang, Xin Peng, Tong Zhou, Chaofeng Sha, Zhenghui Yan, Yiru Chen, Hong Yang
Due to the large volume and high complexity of trace data, microservice trace analysis tasks such as anomaly detection, fault diagnosis, and tail-based sampling widely adopt machine learning technology. These trace analysis approaches usually use a preprocessing step that maps structured features of traces to vector representations in an ad-hoc way, and may therefore lose important information such as topological dependencies between service operations. In this paper, we propose TraceCRL, a trace representation learning approach based on contrastive learning and graph neural networks that can incorporate graph-structured information into downstream trace analysis tasks. Given a trace, TraceCRL constructs an operation invocation graph in which nodes represent service operations and edges represent operation invocations, together with predefined features for invocation status and related metrics. Based on these operation invocation graphs, TraceCRL uses a contrastive learning method to train a graph neural network-based model for trace representation. In particular, TraceCRL employs six trace data augmentation strategies to alleviate the problems of class collision and representation uniformity in contrastive learning. Our experimental studies show that TraceCRL can significantly improve the performance of trace anomaly detection and offline trace sampling, and they confirm the effectiveness of the trace augmentation strategies and the efficiency of TraceCRL.
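The operation invocation graph construction can be sketched from span data alone. The span tuple layout and the specific node/edge features below are our simplification, not TraceCRL's predefined feature set:

```python
from collections import defaultdict

def build_invocation_graph(spans):
    """Build an operation invocation graph from trace spans.
    Each span is (span_id, parent_id, operation, duration_ms, status).
    Nodes are service operations with simple status/latency features;
    edges count parent->child invocations between operations."""
    op_of = {sid: op for sid, _, op, _, _ in spans}
    nodes = defaultdict(lambda: {"count": 0, "total_ms": 0.0, "errors": 0})
    edges = defaultdict(int)
    for sid, parent, op, ms, status in spans:
        n = nodes[op]
        n["count"] += 1
        n["total_ms"] += ms
        n["errors"] += status != "OK"
        if parent is not None:
            edges[(op_of[parent], op)] += 1
    return dict(nodes), dict(edges)

# A hypothetical three-span trace.
spans = [
    ("1", None, "gateway.route", 30.0, "OK"),
    ("2", "1", "orders.create", 20.0, "OK"),
    ("3", "2", "payments.charge", 12.0, "ERROR"),
]
nodes, edges = build_invocation_graph(spans)
print(edges)
```

TraceCRL then augments such graphs (six strategies) and feeds positive/negative pairs to a GNN encoder trained with a contrastive objective; this sketch covers only the graph-construction step.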
Citations: 4
Automated unearthing of dangerous issue reports
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549156
Shengyi Pan, Jiayuan Zhou, F. R. Côgo, Xin Xia, Lingfeng Bao, Xing Hu, Shanping Li, Ahmed E. Hassan
The coordinated vulnerability disclosure (CVD) process is commonly adopted for open source software (OSS) vulnerability management; it recommends reporting discovered vulnerabilities privately and keeping relevant information secret until the official disclosure. In practice, however, for various reasons (e.g., lacking security domain expertise or security management awareness), many vulnerabilities are first reported via public issue reports (IRs) before their official disclosure. Such IRs are dangerous, since attackers can take advantage of the leaked vulnerability information to launch zero-day attacks. It is crucial to identify such dangerous IRs at an early stage, so that OSS users can start the vulnerability remediation process earlier and OSS maintainers can manage the dangerous IRs in a timely manner. In this paper, we propose and evaluate a deep learning based approach, MemVul, to automatically identify dangerous IRs at the time they are reported. MemVul augments a neural network with a memory component that stores external vulnerability knowledge from the Common Weakness Enumeration (CWE). We rely on publicly accessible CVE-referred IRs (CIRs) to operationalize the concept of a dangerous IR, and we mine 3,937 CIRs distributed across 1,390 OSS projects hosted on GitHub. Evaluated under a practical scenario of high data imbalance, MemVul achieves the best trade-off between precision and recall among all baselines; in particular, its F1-score (0.49) improves on the best-performing baseline by 44%. For IRs that are predicted as CIRs but not reported to CVE, we conducted a user study to investigate their usefulness to OSS stakeholders. We observe that 82% (41 out of 50) of these IRs are security-related, and security experts suggested that 28 of them be publicly disclosed, indicating that MemVul is capable of identifying undisclosed dangerous IRs.
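The retrieval half of a memory-augmented classifier like MemVul's can be sketched as a nearest-slot lookup over stored CWE embeddings. The vectors and slot names below are made up for illustration; in MemVul the memory contents are learned jointly with the text encoder:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy "memory": one embedding per CWE entry (values are invented).
cwe_memory = {
    "CWE-79 (XSS)":       [0.9, 0.1, 0.0],
    "CWE-125 (OOB read)": [0.1, 0.9, 0.2],
}

def nearest_cwe(ir_embedding):
    """Return the memory slot most similar to an issue-report
    embedding -- the lookup step that injects external CWE
    knowledge into the classification decision."""
    return max(cwe_memory, key=lambda k: cosine(cwe_memory[k], ir_embedding))

print(nearest_cwe([0.8, 0.2, 0.1]))
```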
Citations: 3
Large-scale analysis of non-termination bugs in real-world OSS projects
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549129
X. Shi, Xiaofei Xie, Yi Li, Yao Zhang, Sen Chen, Xiaohong Li
Termination is a crucial program property. Non-termination bugs can be subtle to detect and may remain hidden for a long time before they take effect. Many real-world programs still suffer severe consequences (e.g., no response) caused by non-termination bugs. As a classic problem, termination proving has been studied for many years; many termination checking tools and techniques have been developed and have demonstrated effectiveness on existing well-established benchmarks. However, the capability of these tools to find practical non-termination bugs had yet to be tested on real-world projects. To fill this gap, in this paper we conducted the first large-scale empirical study of non-termination bugs in real-world OSS projects. Specifically, we first devoted substantial manual effort to collecting and analyzing 445 non-termination bugs from 3,142 GitHub commits and provided a systematic classification of the bugs based on their root causes. We constructed a new benchmark set characterizing the real-world bugs with simplified programs, including a non-termination dataset with 56 real and reproducible non-termination bugs and a termination dataset with 58 fixed programs. With the constructed benchmark, we evaluated five state-of-the-art termination analysis tools. The results show that the ability of the tested tools to reach correct verdicts drops markedly compared with their performance on existing benchmarks. Meanwhile, we identified the challenges and limitations these tools face by analyzing the root causes of their unhandled bugs. Finally, we summarized the challenges and future research directions for detecting non-termination bugs in real-world projects.
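One recurring root-cause pattern in such studies is a loop that makes no progress toward its exit condition. A minimal illustration (our own example, not drawn from the paper's dataset), with a step budget added so the buggy version stays runnable here:

```python
def find_index_buggy(xs, target, budget=10_000):
    """Non-termination pattern: the index is not advanced on the
    mismatch path, so the loop condition never changes. 'budget'
    is our guard for this sketch; the real bug spins forever."""
    i, steps = 0, 0
    while i < len(xs):
        steps += 1
        if steps > budget:
            return None  # would never terminate without the guard
        if xs[i] == target:
            return i
        # BUG: missing 'i += 1' here
    return -1

def find_index_fixed(xs, target):
    """Fixed version: every loop iteration makes progress."""
    i = 0
    while i < len(xs):
        if xs[i] == target:
            return i
        i += 1  # progress toward the loop exit on every path
    return -1

print(find_index_buggy([1, 2, 3], 3), find_index_fixed([1, 2, 3], 3))
```

Note how the bug only manifests on inputs where the first element is not the target, which is one reason such defects can stay hidden for a long time.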
Citations: 2
JSIMutate: understanding performance results through mutations
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3558930
Thomas Laurent, Paolo Arcaini, Catia Trubiani, Anthony Ventresque
Understanding the performance characteristics of software systems is particularly relevant when evaluating design alternatives. However, it is a very challenging problem, owing to the complexity of interpreting how different system elements affect performance metrics of interest, such as system response time or resource utilisation. This work introduces JSIMutate, a tool that makes use of queueing network performance models and enables the analysis of mutations of a model reflecting possible design changes, supporting designers in identifying the model elements that contribute to improving or worsening the system's performance.
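The flavour of such a mutation analysis can be sketched on the simplest queueing model, an M/M/1 station with mean response time R = 1/(mu - lambda). Sweeping a mutated service rate and watching the metric shift is our toy analogue of what JSIMutate does over full queueing network models; the rates below are invented:

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda).
    Valid only for a stable queue (lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

def mutate_service_rate(base_mu: float, factors=(0.8, 1.0, 1.25)):
    """'What if' sweep: perturb one model element (a station's
    service rate) and report the response-time metric per mutant."""
    lam = 6.0  # assumed arrival rate, requests/s
    return {f: mm1_response_time(lam, base_mu * f) for f in factors}

print(mutate_service_rate(10.0))
```

Comparing the mutants against the unmutated model (factor 1.0) shows which direction of change at that station helps or hurts, which is the kind of insight the tool surfaces per model element.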
Citations: 0
How to better utilize code graphs in semantic code search?
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549087
Yucen Shi, Ying Yin, Zhengkui Wang, David Lo, Tao Zhang, Xin Xia, Yuhai Zhao, Bowen Xu
Semantic code search greatly facilitates software reuse by enabling users to find code snippets that closely match user-specified natural language queries. Owing to the rich expressive power of code graphs (e.g., the control-flow graph and program dependency graph), both mainstream lines of research (multi-modal models and pre-trained models) have attempted to incorporate code graphs into code modelling. However, they still have limitations: first, there is still much room for improvement in search effectiveness; second, they have not fully considered the unique features of code graphs. In this paper, we propose a Graph-to-Sequence Converter, G2SC. By converting code graphs into lossless sequences, G2SC addresses the small-graph learning problem through sequence feature learning and captures both the edge and node attribute information of code graphs, so the effectiveness of code search can be greatly improved. Specifically, G2SC first converts the code graph into a unique corresponding node sequence using a specific graph traversal strategy; it then obtains a statement sequence by replacing each node with its corresponding statement. A set of carefully designed graph traversal strategies guarantees that the process is one-to-one and reversible. G2SC captures rich semantic relationships (i.e., control flow, data flow, and node/relationship properties) and provides a learning-model-friendly data transformation. It can be flexibly integrated with existing models to better utilize code graphs. As a proof-of-concept application, we present two G2SC-enabled models: GSMM (a G2SC-enabled multi-modal model) and GSCodeBERT (a G2SC-enabled CodeBERT model). Extensive experimental results on two real large-scale datasets demonstrate that GSMM and GSCodeBERT greatly improve on the state-of-the-art models MMAN and GraphCodeBERT, by 92% and 22% on R@1, and by 63% and 11.5% on MRR, respectively.
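The node-sequence-then-statement-sequence pipeline can be sketched with a deterministic traversal. This is a much-simplified stand-in for G2SC's traversal strategies: the real conversion is carefully designed to be one-to-one and reversible, which a plain DFS is not, and the example graph is ours:

```python
def graph_to_sequence(adj, statements, root):
    """Deterministic DFS that linearizes a code graph into a node
    sequence, then maps each node to its statement. Sorting the
    successors makes the traversal order unique."""
    seq, seen = [], set()

    def dfs(n):
        if n in seen:
            return
        seen.add(n)
        seq.append(n)
        for m in sorted(adj.get(n, [])):
            dfs(m)

    dfs(root)
    return [statements[n] for n in seq]

# A tiny hypothetical control-flow graph for an if/else.
adj = {0: [1, 2], 1: [3], 2: [3]}
statements = {0: "if x > 0:", 1: "y = 1", 2: "y = -1", 3: "return y"}
print(graph_to_sequence(adj, statements, 0))
```

The resulting statement sequence is what a standard sequence encoder can consume, which is how G2SC lets sequence feature learning stand in for learning directly on small graphs.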
Citations: 4
Unite: an adapter for transforming analysis tools to web services via OSLC
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3558939
O. Vašíček, Jan Fiedor, Tomas Kratochvila, B. Krena, A. Smrčka, Tomáš Vojnar
This paper describes Unite, a new tool intended as an adapter for transforming non-interactive command-line analysis tools into OSLC-compliant web services. Unite aims to make such tools easier to adopt and more convenient to use by allowing them to be accessed, both locally and remotely, in a unified way and to be easily integrated into various development environments. Open Services for Lifecycle Collaboration (OSLC) is an open standard for tool integration; it was chosen for this task because of its robustness, extensibility, support for data from various domains, and growing popularity. The work is motivated by the goal of making existing analysis tools more widely usable, with a strong emphasis on widening their industrial usage. We have implemented Unite, used it with multiple existing static and dynamic analysis and verification tools, and then successfully deployed it internationally in industry to automate verification tasks for development teams at Honeywell. We discuss Honeywell's experience with using Unite and with OSLC in general. Moreover, we also provide the Unite Client (UniC) for Eclipse, which allows users to easily run various analysis tools directly from the Eclipse IDE.
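The core of any such adapter is running a non-interactive tool and wrapping its exit status and output in a structured result. A minimal sketch of that wrapping step (the field names are illustrative only, not the actual OSLC Automation vocabulary, and a Python one-liner stands in for a real analyzer so the example is portable):

```python
import subprocess
import sys

def run_tool(argv):
    """Run a non-interactive analysis tool and wrap its exit code
    and output in a dict shaped loosely like an automation result
    that a web service could serialize for clients."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    return {
        "verdict": "passed" if proc.returncode == 0 else "failed",
        "exitCode": proc.returncode,
        "contribution": proc.stdout.strip(),
    }

# Portable stand-in for an analyzer binary.
result = run_tool([sys.executable, "-c", "print('0 warnings')"])
print(result)
```

In Unite the analogous result is exposed over HTTP following the OSLC resource model, so any OSLC-aware client or IDE integration can trigger the tool and consume the outcome uniformly.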
Citations: 1
AccessiText: automated detection of text accessibility issues in Android apps
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549118
Abdulaziz Alshayban, S. Malek
For the 15% of the world's population with disabilities, accessibility is arguably the most critical software quality attribute. The growing reliance of users with disabilities on mobile apps to complete their day-to-day tasks further stresses the need for accessible software. Mobile operating systems, such as iOS and Android, provide various integrated assistive services to help individuals with disabilities perform tasks that would otherwise be difficult or impossible. However, for these assistive services to work correctly, developers have to support them in their app by following a set of best practices and accessibility guidelines. The Text Scaling Assistive Service (TSAS) is utilized by people with low vision to increase the text size and make apps accessible to them. However, using TSAS with incompatible apps can result in unexpected behavior that introduces accessibility barriers for users. This paper presents approach, an automated testing technique for text accessibility issues arising from incompatibility between apps and TSAS. As a first step, we identify five different types of text accessibility issues by analyzing more than 600 candidate issues reported by users in (i) app reviews for Android and iOS and (ii) Twitter data collected from public Twitter accounts. To automatically detect such issues, approach utilizes UI screenshots and various metadata extracted using dynamic analysis, and then applies heuristics informed by the different types of text accessibility issues identified earlier. Evaluation of approach on 30 real-world Android apps corroborates its effectiveness, achieving 88.27% precision and 95.76% recall on average in detecting text accessibility issues.
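One of the issue types such heuristics can catch is text that overflows a fixed-size container once the OS text scale increases. The rule, view-tuple layout, and thresholds below are a simplified invention of ours, not the paper's actual detectors, which work over screenshots and richer UI metadata:

```python
def text_scaling_issues(views, scale=2.0):
    """Flag views whose text would overflow a fixed-height container
    when the OS text size is scaled up.
    Each view is (id, text_height_px, container_height_px, wraps)."""
    issues = []
    for vid, text_h, box_h, wraps in views:
        scaled = text_h * scale
        # A view that cannot wrap and whose scaled text exceeds its
        # box is a candidate for the "text cut off" issue type.
        if scaled > box_h and not wraps:
            issues.append((vid, "text cut off at %.0f%% scale" % (scale * 100)))
    return issues

# Two hypothetical views extracted from a UI dump.
views = [
    ("btn_login", 16, 24, False),  # 32 px of text in a 24 px box
    ("lbl_title", 20, 80, True),   # wrapping label, enough room
]
print(text_scaling_issues(views))
```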
Citations: 3
Peahen: fast and precise static deadlock detection via context reduction
Pub Date : 2022-11-07 DOI: 10.1145/3540250.3549110
Yuandao Cai, Chengfeng Ye, Qingkai Shi, Charles Zhang
Deadlocks still severely inflict reliability and security issues upon modern software systems. Worse still, in prior static deadlock detectors, good precision does not go hand in hand with high scalability: their approaches are either context-insensitive, thereby engendering many false positives, or suffer from calling-context explosion when made context-sensitive, thus compromising efficiency. In this paper, we advocate Peahen, geared towards precise yet scalable static deadlock detection. At its crux, Peahen decomposes the computational effort for achieving high precision into two cooperative analysis stages: (i) context-insensitive lock-graph construction, which selectively encodes the essential lock-acquisition information on each edge, and (ii) three precise yet lazy refinements, which incorporate such edge information into progressively refining the deadlock cycles in the lock graph for only a few interesting calling contexts. Our extensive experiments yield promising results: Peahen dramatically outperforms the state-of-the-art tools on accuracy without losing scalability; it can efficiently check million-line systems at a low false-positive rate; and it has uncovered many confirmed deadlocks in dozens of mature open-source systems.
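The lock-graph idea at the core of stage (i) can be illustrated with a small sketch (this is not Peahen's implementation, and the refinement stages are omitted): add an edge a → b whenever some thread acquires lock b while already holding lock a, then report cycles in the graph, which correspond to potential deadlocks.

```python
# Illustrative lock-graph construction and cycle detection.
from collections import defaultdict

def build_lock_graph(traces):
    """traces: per-thread ordered lists of acquired locks, each lock assumed
    held for the remainder of its trace. Edge h -> l means l was acquired
    while h was held."""
    graph = defaultdict(set)
    for trace in traces:
        held = []
        for lock in trace:
            for h in held:
                graph[h].add(lock)
            held.append(lock)
    return graph

def find_cycle(graph):
    """Return one lock cycle as a list of locks, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    stack = []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in graph[u]:
            if color[v] == GRAY:               # back edge closes a cycle
                return stack[stack.index(v):]
            if color[v] == WHITE:
                cycle = dfs(v)
                if cycle:
                    return cycle
        stack.pop()
        color[u] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# Thread 1 takes A then B; thread 2 takes B then A: the classic ABBA deadlock.
g = build_lock_graph([["A", "B"], ["B", "A"]])
print(find_cycle(g))  # → ['A', 'B']
```

Peahen's contribution lies in what this sketch leaves out: encoding lock-acquisition details on each edge and lazily refining reported cycles with calling-context information for only the few contexts that matter.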
Citations: 5
AgileCtrl: a self-adaptive framework for configuration tuning
Pub Date : 2022-11-07 DOI: 10.1145/3540250.3549136
Shu Wang, Henry Hoffmann, Shan Lu
Software systems increasingly expose performance-sensitive configuration parameters, or PerfConfs, to users. Unfortunately, the right settings of these PerfConfs are difficult to decide and often change at run time. To address this problem, prior research has proposed self-adaptive frameworks that automatically monitor the software’s behavior and dynamically tune configurations to provide the desired performance despite dynamic changes. However, these frameworks often require configuration themselves: sometimes explicitly in the form of additional parameters, sometimes implicitly in the form of training. This paper proposes a new framework, AgileCtrl, that eliminates the need for configuration in a large family of control-based self-adaptive frameworks. AgileCtrl’s key insight is to not just monitor the original software, but additionally to monitor its own adaptations and reconfigure itself when its internal adaptation mechanisms are not meeting software requirements. We evaluate AgileCtrl by comparing it against recent control-based approaches to self-adaptation that require user configuration. Across a number of case studies, we find AgileCtrl withstands model errors up to 106×, saves the system from performance oscillation and crashes, and improves performance by up to 53%. It also auto-adjusts improper performance goals while improving performance by 50%.
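The control-based self-adaptation that AgileCtrl builds on can be sketched as a simple feedback loop (this is a toy illustration, not AgileCtrl's algorithm): measure performance, compute the error against the goal, and nudge a PerfConf knob accordingly. The linear system model below is an assumption for illustration; AgileCtrl's distinguishing step, monitoring the controller itself and reconfiguring it when adaptation underperforms, is not shown.

```python
# Toy integral control loop tuning one performance-sensitive knob.
def run_control_loop(goal, knob=1.0, gain=0.05, steps=200):
    """Adjust `knob` until the (simulated) measured throughput reaches `goal`."""
    for _ in range(steps):
        measured = 10.0 * knob          # stand-in for a real measurement
        error = goal - measured
        knob += gain * error            # integral action accumulates in knob
    return knob, 10.0 * knob

knob, throughput = run_control_loop(goal=42.0)
print(round(throughput, 2))  # → 42.0
```

With this model the loop contracts toward the goal by a factor of 0.5 per step, so it converges quickly; a framework like AgileCtrl must instead cope with measurement noise and large model errors, which is where its self-monitoring comes in.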
Citations: 1