
Software Industry and Engineering: Latest Publications

CORMS: a GitHub and Gerrit based hybrid code reviewer recommendation approach for modern code review
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549115
Prahar Pandya, Saurabh Tiwari
Modern Code Review (MCR) techniques are widely adopted in both open-source software platforms and organizations to ensure the quality of their software products. However, selecting reviewers for code review becomes cumbersome as development teams grow, and recommending inappropriate reviewers costs additional time and effort before the review can be completed effectively. In this paper, we extended the baseline reviewer-recommendation framework, RevFinder, to handle issues with newly created files, retired reviewers, the external validity of results, and the accuracy of the state-of-the-art RevFinder. Our proposed hybrid approach, CORMS, performs similarity analysis to compute similarities among file paths, projects/sub-projects, and author information, and uses prediction models to recommend reviewers based on the subject of the change. We conducted a detailed analysis on 20 widely used projects from both Gerrit and GitHub to compare our results with RevFinder. Our results reveal that, on average, CORMS achieves top-1, top-3, top-5, and top-10 accuracies and a Mean Reciprocal Rank (MRR) of 45.1%, 67.5%, 74.6%, 79.9%, and 0.58 across the 20 projects, improving on RevFinder by 44.9%, 34.4%, 20.8%, 12.3%, and 18.4%, respectively.
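For readers unfamiliar with RevFinder-style recommenders, the core signal is file-path similarity between the files of a new change and the files of past reviews. The sketch below illustrates that idea with a common-prefix comparator; the scoring function, normalization, and data are illustrative assumptions, not CORMS's exact implementation (which also folds in project/sub-project and author similarity plus prediction models).

```python
# Minimal sketch of RevFinder-style path-similarity reviewer recommendation.
# The comparator (shared leading path components) is one of several used in
# this line of work; data and normalization are illustrative, not CORMS's.
from collections import defaultdict

def common_prefix_len(path_a, path_b):
    """Number of leading path components two file paths share."""
    a, b = path_a.split("/"), path_b.split("/")
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def recommend(new_files, past_reviews, top_k=3):
    """Rank past reviewers by path similarity to the files under review."""
    scores = defaultdict(float)
    for files, reviewers in past_reviews:
        sim = sum(common_prefix_len(nf, pf) for nf in new_files for pf in files)
        sim /= len(new_files) * len(files)  # normalize by number of file pairs
        for r in reviewers:
            scores[r] += sim
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

past = [
    (["src/ui/button.c"], ["alice"]),
    (["src/net/http.c", "src/net/tcp.c"], ["bob"]),
]
print(recommend(["src/net/dns.c"], past))  # ['bob', 'alice']
```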
Citations: 6
TraceCRL: contrastive representation learning for microservice trace analysis
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549146
Chenxi Zhang, Xin Peng, Tong Zhou, Chaofeng Sha, Zhenghui Yan, Yiru Chen, Hong Yang
Due to the large volume and high complexity of trace data, microservice trace analysis tasks such as anomaly detection, fault diagnosis, and tail-based sampling widely adopt machine learning technology. These trace analysis approaches usually use a preprocessing step to map structured features of traces to vector representations in an ad-hoc way, and may therefore lose important information such as topological dependencies between service operations. In this paper, we propose TraceCRL, a trace representation learning approach based on contrastive learning and graph neural networks that can incorporate graph-structured information into downstream trace analysis tasks. Given a trace, TraceCRL constructs an operation invocation graph where nodes represent service operations and edges represent operation invocations, together with predefined features for invocation status and related metrics. Based on the operation invocation graphs of traces, TraceCRL uses a contrastive learning method to train a graph neural network-based model for trace representation. In particular, TraceCRL employs six trace data augmentation strategies to alleviate the problems of class collision and representation uniformity in contrastive learning. Our experimental studies show that TraceCRL can significantly improve the performance of trace anomaly detection and offline trace sampling, and they confirm the effectiveness of the trace augmentation strategies and the efficiency of TraceCRL.
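The operation invocation graph described above can be built directly from distributed-tracing spans. The sketch below assumes a simplified span format (span_id, parent_id, operation) and keeps only per-edge invocation counts; TraceCRL additionally attaches predefined status and metric features, which are omitted here.

```python
# Minimal sketch of operation-invocation-graph construction: nodes are
# service operations, edges are caller -> callee invocations. The span
# schema and the counts kept per edge are illustrative assumptions.
from collections import Counter

def build_invocation_graph(spans):
    """spans: list of dicts with span_id, parent_id, and operation name."""
    op_of = {s["span_id"]: s["operation"] for s in spans}
    edges = Counter()
    for s in spans:
        parent = s.get("parent_id")
        if parent in op_of:  # root spans have no caller
            edges[(op_of[parent], s["operation"])] += 1
    nodes = sorted(set(op_of.values()))
    return nodes, dict(edges)

trace = [
    {"span_id": "1", "parent_id": None, "operation": "gateway/route"},
    {"span_id": "2", "parent_id": "1", "operation": "orders/create"},
    {"span_id": "3", "parent_id": "2", "operation": "payments/charge"},
    {"span_id": "4", "parent_id": "2", "operation": "payments/charge"},
]
nodes, edges = build_invocation_graph(trace)
print(edges)
# {('gateway/route', 'orders/create'): 1, ('orders/create', 'payments/charge'): 2}
```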
Citations: 4
Automated unearthing of dangerous issue reports
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549156
Shengyi Pan, Jiayuan Zhou, F. R. Côgo, Xin Xia, Lingfeng Bao, Xing Hu, Shanping Li, Ahmed E. Hassan
The coordinated vulnerability disclosure (CVD) process is commonly adopted for open source software (OSS) vulnerability management; it suggests privately reporting discovered vulnerabilities and keeping relevant information secret until the official disclosure. In practice, however, for various reasons (e.g., lacking security domain expertise or a sense of security management), many vulnerabilities are first reported via public issue reports (IRs) before their official disclosure. Such IRs are dangerous IRs, since attackers can take advantage of the leaked vulnerability information to launch zero-day attacks. It is crucial to identify such dangerous IRs at an early stage, so that OSS users can start the vulnerability remediation process earlier and OSS maintainers can manage the dangerous IRs in a timely manner. In this paper, we propose and evaluate a deep learning based approach, named MemVul, to automatically identify dangerous IRs at the time they are reported. MemVul augments the neural network with a memory component that stores external vulnerability knowledge from the Common Weakness Enumeration (CWE). We rely on publicly accessible CVE-referred IRs (CIRs) to operationalize the concept of a dangerous IR, and we mine 3,937 CIRs distributed across 1,390 OSS projects hosted on GitHub. Evaluated under a practical scenario of high data imbalance, MemVul achieves the best trade-off between precision and recall among all baselines; in particular, its F1-score (0.49) improves the best-performing baseline by 44%. For IRs that are predicted as CIRs but not reported to CVE, we conduct a user study to investigate their usefulness to OSS stakeholders. We observe that 82% (41 out of 50) of these IRs are security-related, and security experts suggest that 28 of them be publicly disclosed, indicating that MemVul is capable of identifying undisclosed dangerous IRs.
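The memory component can be pictured as an attention-based read over stored CWE embeddings whose result augments the classifier input. The sketch below is a generic memory-network read under assumed dimensions; it is not MemVul's exact architecture.

```python
# Minimal sketch of a memory-network "read": the issue-report embedding
# attends over a memory of CWE entry embeddings, and the readout is
# concatenated onto the classifier input. Dimensions, the softmax read,
# and the random data are illustrative assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, memory_keys, memory_values):
    """query: (d,); memory_keys/values: (n, d). Returns a (d,) readout."""
    attn = softmax(memory_keys @ query)  # similarity scores -> attention weights
    return attn @ memory_values          # weighted sum of stored CWE knowledge

rng = np.random.default_rng(0)
cwe_memory = rng.normal(size=(5, 8))    # 5 CWE entries, embedding dim 8
ir_embedding = rng.normal(size=8)       # encoded issue report
readout = memory_read(ir_embedding, cwe_memory, cwe_memory)
augmented = np.concatenate([ir_embedding, readout])  # fed to the classifier
print(augmented.shape)  # (16,)
```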
Citations: 3
Detecting Simulink compiler bugs via controllable zombie blocks mutation
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549159
Shikai Guo, He Jiang, Zhihao Xu, Xiaochen Li, Zhilei Ren, Zhide Zhou, Rong Chen
As a popular Cyber-Physical System (CPS) development tool chain, MathWorks Simulink is widely used to prototype CPS models in safety-critical applications, e.g., aerospace and healthcare. Since all CPS models depend on compilation, it is crucial in practice to ensure the correctness and reliability of the Simulink compiler (i.e., the compiler module of Simulink). However, Simulink compiler testing is challenging due to millions of lines of source code and the lack of a complete formal language specification. Although several methods have been proposed to automatically test the Simulink compiler, two challenges remain to be tackled, namely the limited variant space and insufficient mutation diversity. To address these challenges, we propose COMBAT, a new differential testing method for Simulink compiler testing. COMBAT includes an EMI (Equivalence Modulo Input) mutation component and a diverse variant generation component. The EMI mutation component inserts assertion statements (e.g., If/While blocks) at arbitrary points of the seed CPS model. These statements break each insertion point into true and false branches. COMBAT then feeds all the data passing through the insertion point into the true branch to preserve the equivalence of CPS variants. In this way, the body of the false branch can be viewed as a new variant space, addressing the first challenge. The diverse variant generation component uses Markov chain Monte Carlo optimization to sample the seed CPS model and generate complex mutations of long sequences of blocks in the variant space, addressing the second challenge. Experiments demonstrate that COMBAT significantly outperforms the state-of-the-art approaches in Simulink compiler testing. Within five months, COMBAT has reported 16 valid bugs for Simulink R2021b, of which 11 have been confirmed as new bugs by MathWorks Support.
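Simulink models are graphical, so the EMI idea is easiest to illustrate on plain code: split an insertion point into an always-true branch that passes the observed data through unchanged and a dead false branch that serves as fresh mutation space. The predicate and the dead-statement generator below are illustrative assumptions, not COMBAT's block-level mutators.

```python
# The EMI (Equivalence Modulo Input) idea illustrated on plain code: wrap a
# statement in a guard that is always true for the profiled inputs, so the
# program's behavior is unchanged, while the never-executed else branch can
# host arbitrary "zombie" mutations. Predicate and mutations are illustrative.
import random

def emi_mutate(stmt, rng):
    """Wrap one statement in an always-true guard; mutate the dead branch."""
    dead = rng.choice(["x = x * 0", "raise RuntimeError('unreachable')"])
    return (
        "if (x == x):  # always true for the profiled inputs\n"
        f"    {stmt}\n"
        "else:         # dead branch: free space for mutations\n"
        f"    {dead}"
    )

rng = random.Random(42)
print(emi_mutate("y = x + 1", rng))
```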
Citations: 4
Investigating and improving log parsing in practice
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3558947
Ying Fu, Meng Yan, Jian Xu, Jianguo Li, Zhongxin Liu, Xiaohong Zhang, Dan Yang
Logs are widely used for system behavior diagnosis via automatic log mining. Log parsing is an important data preprocessing step that converts semi-structured log messages into structured data as the feature input for log mining. Currently, many studies are devoted to proposing new log parsers; however, to the best of our knowledge, no previous study comprehensively investigates the effectiveness of log parsers in industrial practice. To do so, we conduct an empirical study on the effectiveness of six state-of-the-art log parsers on 10 microservice applications of Ant Group. Our empirical results highlight two challenges for log parsing in practice. 1) Various separators: a log message can contain many different separators, and the separators differ across event templates and applications; current log parsers cannot perform well because they do not account for such varied separators. 2) Various lengths due to nested objects: log messages belonging to the same event template may have different lengths due to nested objects; the log messages of 6 out of 10 microservice applications at Ant Group exhibit such varied lengths, and 4 out of 6 state-of-the-art log parsers cannot handle them. In this paper, we propose an improved log parser named Drain+, based on the state-of-the-art log parser Drain. Drain+ includes two innovative components to address the above two challenges: a statistics-based separator generation component, which automatically generates separators for log message splitting, and a candidate event template merging component, which merges candidate event templates using a template similarity method. We evaluate the effectiveness of Drain+ on 10 microservice applications of Ant Group and 16 public datasets. The results show that Drain+ outperforms the six state-of-the-art log parsers on industrial applications and public datasets. Finally, we summarize our observations on the road ahead for log parsing to inspire other researchers and practitioners.
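The candidate-template merging step can be pictured as pairwise token comparison, with mismatching positions collapsed into a wildcard. The similarity metric, threshold, and greedy merge order below are illustrative assumptions, not Drain+'s exact method.

```python
# Minimal sketch of candidate event-template merging: two same-length
# templates merge when their token-level similarity exceeds a threshold,
# and mismatching positions become the wildcard "<*>". Metric, threshold,
# and greedy ordering are illustrative assumptions.
def similarity(t1, t2):
    """Fraction of positions with identical tokens (same-length templates)."""
    if len(t1) != len(t2):
        return 0.0
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)

def merge(t1, t2):
    return [a if a == b else "<*>" for a, b in zip(t1, t2)]

def merge_candidates(templates, threshold=0.75):
    merged = []
    for t in templates:
        for i, m in enumerate(merged):
            if similarity(t, m) >= threshold:
                merged[i] = merge(t, m)
                break
        else:
            merged.append(t)
    return merged

cands = [
    ["Connected", "to", "10.0.0.1", "port", "8080"],
    ["Connected", "to", "10.0.0.2", "port", "8080"],
    ["Disk", "full", "on", "/var", "log"],
]
print(merge_candidates(cands))
# [['Connected', 'to', '<*>', 'port', '8080'], ['Disk', 'full', 'on', '/var', 'log']]
```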
Citations: 7
NL2Viz: natural language to visualization via constrained syntax-guided synthesis
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549140
Zhengkai Wu, Vu Le, A. Tiwari, Sumit Gulwani, Arjun Radhakrishna, Ivan Radicek, Gustavo Soares, Xinyu Wang, Zhenwen Li, Tao Xie
Recent developments in NL2CODE (Natural Language to Code) research allow end-users, especially novice programmers, to create a concrete implementation of their ideas, such as a data visualization, by providing natural language (NL) instructions. An NL2CODE system often fails to achieve its goal due to three major challenges: the user's words have contextual semantics, the user may not include all details needed for code generation, and the system's results are imperfect and require further refinement. To address these three challenges for NL-to-visualization, we propose a new approach and its supporting tool, named NL2VIZ, with three salient features: (1) leveraging not only the user's NL input but also the data and program context the NL query operates on, (2) using hard/soft constraints to reflect different confidence levels in the constraints retrieved from the user input and the data/program context, and (3) providing support for result refinement and reuse. We implement NL2VIZ in the Jupyter Notebook environment and evaluate it on a real-world visualization benchmark and a public dataset to show its effectiveness. We also conduct a user study involving 6 data science professionals to demonstrate the usability of NL2VIZ, the readability of the generated code, and NL2VIZ's effectiveness in helping users generate desired visualizations effectively and efficiently.
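The hard/soft constraint split can be pictured as filtering versus scoring over candidate visualization specs: hard constraints eliminate candidates outright, soft constraints only adjust their rank. The spec fields, predicates, and weights below are illustrative assumptions, not NL2VIZ's internal representation.

```python
# Minimal sketch of constrained candidate ranking: hard constraints filter,
# soft constraints score. Candidates, predicates, and weights are
# illustrative assumptions standing in for synthesized visualization specs.
def rank_candidates(candidates, hard, soft):
    viable = [c for c in candidates if all(h(c) for h in hard)]
    def score(c):
        return sum(w for pred, w in soft if pred(c))
    return sorted(viable, key=score, reverse=True)

candidates = [
    {"mark": "bar", "x": "month", "y": "sales"},
    {"mark": "line", "x": "month", "y": "sales"},
    {"mark": "line", "x": "region", "y": "sales"},
]
hard = [lambda c: c["y"] == "sales"]             # query names "sales" explicitly
soft = [(lambda c: c["x"] == "month", 1.0),      # data context hints a temporal x
        (lambda c: c["mark"] == "line", 0.5)]    # "trend" wording hints a line mark
print(rank_candidates(candidates, hard, soft)[0])
# {'mark': 'line', 'x': 'month', 'y': 'sales'}
```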
Citations: 1
How to better utilize code graphs in semantic code search?
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549087
Yucen Shi, Ying Yin, Zhengkui Wang, David Lo, Tao Zhang, Xin Xia, Yuhai Zhao, Bowen Xu
Semantic code search greatly facilitates software reuse by enabling users to find code snippets that closely match user-specified natural language queries. Due to the rich expressive power of code graphs (e.g., the control-flow graph and program dependency graph), both mainstream lines of research (i.e., multi-modal models and pre-trained models) have attempted to incorporate code graphs for code modelling. However, they still have some limitations: first, there is still much room for improvement in search effectiveness; second, they have not fully considered the unique features of code graphs. In this paper, we propose a Graph-to-Sequence Converter, namely G2SC. By converting code graphs into lossless sequences, G2SC addresses the small-graph learning problem through sequence feature learning and captures both the edge and node attribute information of code graphs, so the effectiveness of code search can be greatly improved. In particular, G2SC first converts the code graph into a unique corresponding node sequence via a specific graph traversal strategy. It then obtains a statement sequence by replacing each node with its corresponding statement. A set of carefully designed graph traversal strategies guarantees that the process is one-to-one and reversible. G2SC captures rich semantic relationships (i.e., control flow, data flow, and node/relationship properties), provides learning-model-friendly data transformation, and can be flexibly integrated with existing models to better utilize code graphs. As a proof-of-concept application, we present two G2SC-enabled models: GSMM (a G2SC-enabled multi-modal model) and GSCodeBERT (a G2SC-enabled CodeBERT model). Extensive experimental results on two real large-scale datasets demonstrate that GSMM and GSCodeBERT improve the state-of-the-art models MMAN and GraphCodeBERT by 92% and 22% on R@1, and by 63% and 11.5% on MRR, respectively.
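The graph-to-sequence conversion can be pictured as a deterministic traversal that interleaves node statements with edge labels and emits back-references for revisited nodes, so equal graphs yield equal sequences. The sketch below illustrates this on a toy flow graph; G2SC's actual traversal strategies, which guarantee a one-to-one and reversible conversion, are more involved.

```python
# Minimal sketch of graph-to-sequence conversion: a deterministic DFS over
# a labeled code graph that keeps edge labels in the sequence and replaces
# revisited nodes with back-references. The toy graph and token format are
# illustrative assumptions, not G2SC's exact encoding.
def graph_to_sequence(nodes, edges, root):
    """nodes: id -> statement; edges: (src, dst) -> label."""
    out = {src: sorted((lbl, dst) for (s, dst), lbl in edges.items() if s == src)
           for src in nodes}
    seq, seen = [], set()
    def dfs(n):
        if n in seen:
            seq.append(f"ref:{n}")  # back-reference instead of re-expansion
            return
        seen.add(n)
        seq.append(nodes[n])
        for lbl, dst in out[n]:     # deterministic order: sorted by edge label
            seq.append(f"<{lbl}>")
            dfs(dst)
    dfs(root)
    return seq

nodes = {0: "x = read()", 1: "if x > 0", 2: "y = x", 3: "print(y)"}
edges = {(0, 1): "flow", (1, 2): "true", (2, 3): "flow", (1, 3): "false"}
print(graph_to_sequence(nodes, edges, 0))
```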
Citations: 4
Unite: an adapter for transforming analysis tools to web services via OSLC
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3558939
O. Vašíček, Jan Fiedor, Tomas Kratochvila, B. Krena, A. Smrčka, Tomáš Vojnar
This paper describes Unite, a new tool intended as an adapter for transforming non-interactive command-line analysis tools into OSLC-compliant web services. Unite aims to make such tools easier to adopt and more convenient to use by making them accessible, both locally and remotely, in a unified way and easy to integrate into various development environments. Open Services for Lifecycle Collaboration (OSLC), an open standard for tool integration, was chosen for this task due to its robustness, extensibility, support for data from various domains, and growing popularity. The work is motivated by allowing existing analysis tools to be more widely used, with a strong emphasis on widening their industrial usage. We have implemented Unite, used it with multiple existing static and dynamic analysis and verification tools, and then successfully deployed it internationally in industry to automate verification tasks for development teams at Honeywell. We discuss Honeywell's experience with using Unite and with OSLC in general. Moreover, we also provide the Unite Client (UniC) for Eclipse to allow users to easily run various analysis tools directly from the Eclipse IDE.
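Stripped of OSLC specifics, the adapter pattern is: accept an analysis request over HTTP, run the wrapped command-line tool, and return its output. The sketch below shows only that bare pattern with a hypothetical `analyzer` binary; real OSLC compliance adds standardized RDF resource shapes, service discovery, and asynchronous automation requests on top.

```python
# Bare sketch of the adapter pattern Unite generalizes: an HTTP endpoint
# that runs a non-interactive command-line analysis tool on the posted
# input and returns its output. The "analyzer" binary, endpoint, and file
# handling are illustrative assumptions; OSLC machinery is omitted.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class AnalysisAdapter(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        source = self.rfile.read(length)            # file contents to analyze
        with open("/tmp/input.c", "wb") as f:
            f.write(source)
        # Run the wrapped command-line tool (hypothetical analyzer binary).
        result = subprocess.run(
            ["analyzer", "--check", "/tmp/input.c"],
            capture_output=True, text=True, timeout=60,
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AnalysisAdapter).serve_forever()
```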
Citations: 1
Peahen: fast and precise static deadlock detection via context reduction
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549110
Yuandao Cai, Chengfeng Ye, Qingkai Shi, Charles Zhang
Deadlocks still inflict severe reliability and security issues upon modern software systems. Worse still, in prior static deadlock detectors, good precision does not go hand-in-hand with high scalability: their approaches are either context-insensitive, engendering many false positives, or suffer from calling-context explosion to achieve context sensitivity, compromising efficiency. In this paper, we advocate Peahen, geared towards precise yet scalable static deadlock detection. At its crux, Peahen decomposes the computational effort for achieving high precision into two cooperative analysis stages: (i) context-insensitive lock-graph construction, which selectively encodes the essential lock-acquisition information on each edge, and (ii) three precise yet lazy refinements, which incorporate such edge information into progressively refining the deadlock cycles in the lock graph for only a few interesting calling contexts. Our extensive experiments yield promising results: Peahen dramatically outperforms the state-of-the-art tools on accuracy without losing scalability; it can efficiently check million-line systems at a low false-positive rate; and it has uncovered many confirmed deadlocks in dozens of mature open-source systems.
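A lock graph draws an edge a -> b whenever some thread acquires lock b while holding lock a; a cycle then signals a potential deadlock. The sketch below shows that context-insensitive core (assuming, for simplicity, that locks are held until the end of each trace); Peahen's per-edge lock-acquisition annotations and lazy cycle refinements are beyond this sketch.

```python
# Minimal lock-graph sketch: edge (a, b) means some thread acquired lock b
# while holding lock a; a cycle indicates a potential deadlock. Traces are
# simplified to ordered acquisition lists with no releases.
from itertools import combinations

def build_lock_graph(thread_traces):
    """thread_traces: per-thread ordered lists of acquired locks."""
    edges = set()
    for trace in thread_traces:
        # Pair every held lock with every later acquisition (no releases).
        for held, acquired in combinations(trace, 2):
            edges.add((held, acquired))
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(n):  # DFS: a GRAY node reached again means a back edge / cycle
        color[n] = GRAY
        for m in graph.get(n, []):
            c = color.get(m, WHITE)
            if c == GRAY or (c == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(visit(n) for n in graph if color.get(n, WHITE) == WHITE)

# Thread 1 takes A then B; thread 2 takes B then A: the classic deadlock.
print(has_cycle(build_lock_graph([["A", "B"], ["B", "A"]])))  # True
```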
Citations: 5
What motivates software practitioners to contribute to inner source?
Pub Date: 2022-11-07 DOI: 10.1145/3540250.3549148
Zhiyuan Wan, Xin Xia, Yun Zhang, David Lo, Daibing Zhou, Qiuyuan Chen, A. Hassan
Software development organizations have adopted open source development practices to support or augment their software development processes, a phenomenon referred to as inner source. Given the rapid adoption of inner source, we wonder what motivates software practitioners to contribute to inner source projects. We followed a mixed-methods approach: a qualitative phase of interviews with 20 interviewees, followed by a quantitative phase of an exploratory survey with 124 respondents from 13 countries across four continents. Our study uncovers practitioners' motivations for contributing to inner source projects, as well as how these motivations differ from what motivates practitioners to participate in open source projects. We also investigate how software practitioners' motivation impacts their contribution level and continuance intention in inner source projects. Based on our findings, we outline directions for future research and provide recommendations for organizations and software practitioners.
Citations: 1