
2020 IEEE 20th International Working Conference on Source Code Analysis and Manipulation (SCAM): Latest Publications

A Parallel Worklist Algorithm for Modular Analyses
Noah Van Es, Quentin Stiévenart, J. V. D. Plas, Coen De Roover
One way to speed up static program analysis is to make use of today’s multi-core CPUs by parallelising the analysis. Existing work on parallel analysis usually targets traditional data-flow analyses for static, first-order languages such as C. Less attention has been given so far to the parallelisation of more general analyses that can also target dynamic, higher-order languages such as JavaScript. These are significantly more challenging to parallelise, as dependencies between analysis results are only discovered during the analysis itself. State-of-the-art parallel analyses for such languages are therefore usually limited, both in their applicability and performance gains. In this work, we propose the parallelisation of modular analyses. Modular analyses compute different parts of the analysis in isolation from one another, and therefore offer inherent opportunities for parallelisation that have not been explored so far. In addition, they can be used to develop a general class of analysers for dynamic, higher-order languages. We present a parallel variant of the worklist algorithm that is used to drive such modular analyses. To further speed up its convergence, we show how this algorithm can exploit the monotonicity of the analysis. Existing modular analyses can be parallelised without additional effort, simply by employing this parallel worklist algorithm instead. We demonstrate this for ModF, an inter-procedural modular analysis, and for ModConc, an inter-process modular analysis. For ModConc, we reveal an additional opportunity to exploit even more parallelism in the analysis. Our parallel worklist algorithm is implemented and integrated into MAF, a framework for modular program analysis. Using a set of Scheme benchmarks for ModF, we usually observe speedups between 3× and 8× when using 4 workers, and speedups between 8× and 32× when using 16 workers. For ModConc, we achieve a maximum speedup of 15×.
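As a rough illustration of the idea (not the MAF implementation from the paper, which is written in Scala), the Python sketch below drives a modular analysis with a pool of workers: components are analysed concurrently, inter-component dependencies are recorded as they are discovered, and the readers of a component are re-enqueued whenever its result changes. The analyse callback and its (result, dependencies) return shape are assumptions made for the sketch; with CPython threads this illustrates the control flow only, not real multi-core speedups.

    from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

    def parallel_worklist(entry, analyse, workers=4):
        # analyse(component, results_snapshot) -> (result, components_it_depends_on)
        results = {}                  # component -> latest (monotonically growing) result
        readers = {}                  # component -> components whose result depends on it
        worklist = {entry}
        running = {}
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while worklist or running:
                for comp in list(worklist):                   # launch every pending component
                    worklist.discard(comp)
                    running[pool.submit(analyse, comp, dict(results))] = comp
                done, _ = wait(running, return_when=FIRST_COMPLETED)
                for future in done:
                    comp = running.pop(future)
                    result, deps = future.result()
                    for dep in deps:                          # register discovered dependencies
                        readers.setdefault(dep, set()).add(comp)
                    if results.get(comp) != result:           # the result changed (grew)
                        results[comp] = result
                        worklist |= readers.get(comp, set())  # re-analyse affected components
        return results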
DOI: https://doi.org/10.1109/SCAM51674.2020.00006
Citations: 2
Ad hoc Test Generation Through Binary Rewriting
Anthony Saieva, S. Singh, G. Kaiser
When a security vulnerability or other critical bug is not detected by the developers’ test suite, and is discovered post-deployment, developers must quickly devise a new test that reproduces the buggy behavior. Then the developers need to test whether their candidate patch indeed fixes the bug, without breaking other functionality, while racing to deploy before attackers pounce on exposed user installations. This can be challenging when factors in a specific user environment triggered the bug. If enabled, however, record-replay technology faithfully replays the execution in the developer environment as if the program were executing in that user environment under the same conditions as the bug manifested. This includes intermediate program states dependent on system calls, memory layout, etc., as well as any externally-visible behavior. Many modern record-replay tools integrate interactive debuggers to help locate the root cause, but don’t help the developers test whether their patch indeed eliminates the bug under those same conditions. In particular, modern record-replay tools that reproduce intermediate program state cannot replay recordings made with one version of a program using a different version of the program where the differences affect program state. This work builds on record-replay and binary rewriting to automatically generate and run targeted tests for candidate patches significantly faster and more efficiently than traditional test suite generation techniques like symbolic execution. These tests reflect the arbitrary (ad hoc) user and system circumstances that uncovered the bug, enabling developers to check whether a patch indeed fixes that bug. The tests essentially replay recordings made with one version of a program using a different version of the program, even when the differences impact program state, by manipulating both the binary executable and the recorded log to produce an execution consistent with what would have happened had the patched version executed in the user environment under the same conditions under which the bug manifested with the original version. Our approach also enables users to make new recordings of their own workloads with the original version of the program, and automatically generate and run the corresponding ad hoc tests on the patched version, to validate that the patch does not break functionality they rely on.
DOI: https://doi.org/10.1109/SCAM51674.2020.00018
Citations: 2
DepGraph: Localizing Performance Bottlenecks in Multi-Core Applications Using Waiting Dependency Graphs and Software Tracing
Naser Ezzati-Jivan, Quentin Fournier, M. Dagenais, A. Hamou-Lhadj
This paper addresses the challenge of understanding the waiting dependencies between the threads and hardware resources required to complete a task. The objective is to improve software performance by detecting the underlying bottlenecks caused by system-level blocking dependencies. In this paper, we use a system-level tracing approach to extract a Waiting Dependency Graph that shows the breakdown of a task execution among all the interleaving threads and resources. The method allows developers and system administrators to quickly discover how the total execution time is divided among its interacting threads and resources. Ultimately, the method helps detect bottlenecks and highlight their possible causes. Our experiments show the effectiveness of the proposed approach in several industry-level use cases. Three performance anomalies are analysed and explained using the proposed approach. Evaluating the method’s efficiency reveals that the imposed overhead never exceeds 10.1%, therefore making it suitable for in-production environments.
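As an illustration of the kind of structure involved (the event schema and aggregation below are invented for the sketch, not taken from the paper), a waiting dependency graph can be built from blocking events in a trace, with each edge weighted by the total time one thread spent waiting on another thread or resource; the heaviest edges then point at candidate bottlenecks.

    from collections import defaultdict

    def build_waiting_graph(events):
        """events: iterable of (timestamp, thread, kind, target), where kind is
        'block' (thread starts waiting on target) or 'unblock' (that wait ends)."""
        graph = defaultdict(float)        # (waiter, target) -> accumulated waiting time
        open_waits = {}                   # (thread, target) -> timestamp of the 'block' event
        for ts, thread, kind, target in sorted(events):
            if kind == 'block':
                open_waits[(thread, target)] = ts
            elif kind == 'unblock' and (thread, target) in open_waits:
                graph[(thread, target)] += ts - open_waits.pop((thread, target))
        return graph

    def bottlenecks(graph, top=3):
        # the heaviest waiting edges are the first candidates to inspect
        return sorted(graph.items(), key=lambda edge: edge[1], reverse=True)[:top]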
DOI: https://doi.org/10.1109/SCAM51674.2020.00022
Citations: 5
Static Extraction of Enforced Authorization Policies SeeAuthz
Bernhard J. Berger, Rodrigue Wete Nguempnang, K. Sohr, R. Koschke
Authorization is an intrinsic part of a software’s security. Determining whether a user is allowed to access a resource or not is crucial, not only in safety-critical applications but also in everyday applications, to prevent misuse of data or software. There is plenty of research in the security community dealing with validating and verifying authorization policies. Still, an implemented authorization policy does not necessarily match the planned authorization policy, i.e., even a validated and verified authorization policy can pose security issues when implemented incorrectly. This gap between the planned and the implemented authorization policy poses the risk of unauthorized access to sensitive resources due to insufficient authorization checks. Therefore, to ensure a system’s security, it is essential to validate the implemented authorization policy against the planned one. We, therefore, describe the authorization pattern and present an algorithm to extract authorization graphs from implemented authorization policies, which can then be compared against the planned authorization policy. To that end, we developed a configurable context-sensitive analysis tailored to Java-based software systems, where the context is the set of authorization facts that hold at each program point. Using a configuration for Apache Shiro, a security library that supports authorization, we evaluated our implementation on an open-source repository system for the management and dissemination of digital content and on a closed-source manufacturing execution system. We discuss additional usage scenarios of the analysis results and describe how to transfer the approach to other authorization policies and programming languages.
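As a generic illustration of pairing resource accesses with the authorization facts that hold on the paths reaching them (a simplified sketch, not the SeeAuthz algorithm; the call graph, check and access labels are invented), one can propagate permission checks along a call graph and report every context under which a resource access is reachable; an empty context signals a potentially missing check.

    from collections import defaultdict

    def authorization_graph(call_graph, checks, accesses, entry):
        """call_graph: caller -> list of callees; checks: node -> permission it enforces;
        accesses: node -> resource it touches. Returns resource -> set of permission
        contexts (frozensets) under which the resource is reachable from `entry`."""
        reached = defaultdict(set)
        stack, visited = [(entry, frozenset())], set()
        while stack:
            node, ctx = stack.pop()
            if node in checks:                       # entering a check extends the context
                ctx = ctx | {checks[node]}
            if (node, ctx) in visited:
                continue
            visited.add((node, ctx))
            if node in accesses:
                reached[accesses[node]].add(ctx)
            for callee in call_graph.get(node, ()):
                stack.append((callee, ctx))
        return reached

    cg = {'main': ['viewDoc', 'deleteDoc'], 'viewDoc': ['readItem'],
          'deleteDoc': ['checkAdmin'], 'checkAdmin': ['removeItem']}
    print(dict(authorization_graph(cg, {'checkAdmin': 'ROLE_ADMIN'},
                                   {'readItem': 'item', 'removeItem': 'item'}, 'main')))
    # {'item': {frozenset(), frozenset({'ROLE_ADMIN'})}} -> one unprotected path to 'item'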
DOI: https://doi.org/10.1109/SCAM51674.2020.00026
Citations: 0
Does code review really remove coding convention violations?
Donggyun Han, Chaiyong Ragkhitwetsagul, J. Krinke, M. Paixão, Giovanni Rosa
Many software developers perceive technical debt as the biggest problem in their projects. They also perceive code reviews as the most important process to increase code quality. As inconsistent coding style is one source of technical debt, it is no surprise that coding convention violations can lead to patch rejection during code review. However, as most research has focused on developers’ perception, it is not clear whether code reviews actually prevent the introduction of coding convention violations and the corresponding technical debt. Therefore, we investigated how coding convention violations are introduced, addressed, and removed during code review by developers. To do this, we analysed 16,442 code review requests from four projects of the Eclipse community for the introduction of convention violations. Our results show that convention violations accumulate as code size increases, despite changes being reviewed. We also manually investigated 1,268 code review requests in which convention violations disappeared and observed that only a minority of them were removed because a convention violation had been flagged in a review comment. The investigation results also highlight that one can speed up the code review process by adopting tools for detecting coding convention violations.
DOI: https://doi.org/10.1109/SCAM51674.2020.00010
Citations: 9
An Approach for the Identification of Information Leakage in Automotive Infotainment systems
A. Moiz, Manar H. Alalfi
Advancements in digitization have revolutionized the automotive industry. Today’s modern cars are equipped with internet connectivity, computers that can provide autonomous driving functionality, and infotainment systems that can run mobile operating systems such as Android Auto and Apple CarPlay. Android Automotive is Google’s Android operating system tailored to run natively on a vehicle’s infotainment system; it allows third-party apps to be installed and run on the vehicle’s infotainment system. Such apps may raise concerns related to users’ safety, security, and privacy. This paper investigates security concerns of in-vehicle apps, specifically those related to inter-component communication (ICC) among these apps. ICC allows apps to share information across components, within or between apps, through a messaging object called an intent. When this communication is insecure, intents can be hijacked or spoofed by malicious apps, and users’ sensitive information can be leaked to an attacker’s database. We investigate the attack surface and vulnerabilities in these apps and provide a static analysis approach and a tool to find data leakage vulnerabilities. The approach can also provide hints to mitigate these leaks. We evaluate our approach by analyzing a set of Android Auto apps downloaded from the Google Play store, and we report our validated results on vulnerabilities identified in those apps.
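As a generic illustration of the kind of check such a static analysis performs (the sketch and its node names are hypothetical; this is not the tool described above), detecting an ICC data leak reduces to asking whether any sensitive source can reach an exposed sink along data-flow edges:

    from collections import deque

    def find_leaks(flow_graph, sources, sinks):
        """flow_graph: node -> set of nodes data may flow to (derived from the app's code)."""
        leaks = []
        for source in sources:
            seen, queue = {source}, deque([source])
            while queue:
                node = queue.popleft()
                if node in sinks:
                    leaks.append((source, node))        # sensitive data reaches an exposed sink
                for succ in flow_graph.get(node, ()):
                    if succ not in seen:
                        seen.add(succ)
                        queue.append(succ)
        return leaks

    # hypothetical flow extracted from an in-vehicle app
    flows = {'getVehicleLocation': {'intent.putExtra'},
             'intent.putExtra': {'sendBroadcast'}}       # implicit broadcast: hijackable
    print(find_leaks(flows, sources={'getVehicleLocation'}, sinks={'sendBroadcast'}))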
DOI: https://doi.org/10.1109/SCAM51674.2020.00017
Citations: 2
Looking for Software Defects? First Find the Nonconformists
Sara Moshtari, Joanna C. S. Santos, Mehdi Mirakhorli, A. Okutan
Software defect prediction models play a key role in increasing the quality and reliability of software systems, because they are used to identify defect-prone source code components and assist testing activities during the development life cycle. Prior research used supervised and unsupervised machine learning models for software defect prediction. Supervised defect prediction models require labeled data; however, it might be time-consuming and expensive to obtain labeled data of the desired quality and volume. Unsupervised defect prediction models usually use clustering techniques to relax the labeled-data requirement; however, labeling detected clusters as defective is a challenging task. The Pareto principle states that a small number of modules contain most of the defects. Inspired by the Pareto principle, this work proposes a novel, unsupervised learning approach that is based on outlier detection. We hypothesize that defect-prone software components have different characteristics compared to others and can be considered outliers; therefore, outlier detection techniques can be used to identify them. The experiment results on 16 software projects from two publicly available datasets (PROMISE and GitHub) indicate that the k-Nearest Neighbor (KNN) outlier detection method can be used to identify the majority of software defects. It could detect 94% of expected defects in the best case and more than 63% of the defects in 75% of the projects. We compare our approach with state-of-the-art supervised and unsupervised defect prediction approaches. The results of rigorous empirical evaluations indicate that the proposed approach outperforms existing unsupervised models and achieves results comparable to the leading supervised techniques that rely on complex training and tuning algorithms.
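A minimal sketch of the KNN-based idea, using scikit-learn for brevity; the feature scaling, the value of k, and the fraction of modules flagged are placeholders, not the exact configuration evaluated in the paper.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.preprocessing import StandardScaler

    def knn_outlier_scores(X, k=5):
        """X: (n_modules, n_metrics) NumPy array of static code metrics. Returns each
        module's distance to its k-th nearest neighbour; the larger the distance, the
        more of a 'nonconformist' (and, by hypothesis, defect-prone) the module is."""
        Xs = StandardScaler().fit_transform(X)
        neighbours = NearestNeighbors(n_neighbors=k + 1).fit(Xs)  # +1: each point is its own neighbour
        distances, _ = neighbours.kneighbors(Xs)
        return distances[:, -1]

    def flag_defect_prone(X, k=5, fraction=0.2):
        scores = knn_outlier_scores(X, k)
        cutoff = np.quantile(scores, 1 - fraction)     # flag the top `fraction` as outliers
        return scores >= cutoff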
DOI: https://doi.org/10.1109/SCAM51674.2020.00014
Citations: 3
Optimizing Away JavaScript Obfuscation
Adrián Herrera
JavaScript is a popular attack vector for releasing malicious payloads on unsuspecting Internet users. Authors of this malicious JavaScript often employ numerous obfuscation techniques to prevent automatic detection by antivirus software and to hinder manual analysis by professional malware analysts. Consequently, this paper presents SAFE-DEOBS, a JavaScript deobfuscation tool that we have built. The aim of SAFE-DEOBS is to automatically deobfuscate JavaScript malware such that an analyst can more rapidly determine the malicious script’s intent. This is achieved through a number of static analyses inspired by techniques from compiler theory. We demonstrate the utility of SAFE-DEOBS through a case study on real-world JavaScript malware, and show that it is a useful addition to a malware analyst’s toolset.
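SAFE-DEOBS targets JavaScript; purely to illustrate one of the compiler-theory techniques such a deobfuscator builds on, the sketch below applies constant folding to Python’s own AST, collapsing obfuscation-style arithmetic and string concatenations into literals.

    import ast, operator

    FOLDABLE = {ast.Add: operator.add, ast.Sub: operator.sub,
                ast.Mult: operator.mul, ast.Div: operator.truediv}

    class ConstantFolder(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)                     # fold the children first
            op = FOLDABLE.get(type(node.op))
            if op and isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
                try:
                    value = op(node.left.value, node.right.value)
                    return ast.copy_location(ast.Constant(value), node)
                except Exception:
                    pass                                 # leave anything we cannot fold safely
            return node

    tree = ConstantFolder().visit(ast.parse("x = (1 + 2) * 3; y = 'he' + 'llo'"))
    print(ast.unparse(ast.fix_missing_locations(tree)))  # prints: x = 9  and  y = 'hello'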
DOI: https://doi.org/10.1109/SCAM51674.2020.00029
Citations: 4
Failure of One, Fall of Many: An Exploratory Study of Software Features for Defect Prediction
G. E. D. Santos, Eduardo Figueiredo
Software defect prediction represents an area of interest in both academia and the software industry. Software defects are prevalent in software development and can create numerous difficulties for users and developers alike. The current literature offers multiple alternative approaches to predict the likelihood of defects in source code. Most of these studies concentrate on predicting defects from a broad set of software features. As a result, the individual discriminating power of software features is still unknown, as some features perform well only with specific projects or metrics. In this study, we applied machine learning techniques to a popular dataset. This dataset has information about software defects in five Java projects, containing 5,371 classes and 37 software features. To this aim, we conducted an exploratory investigation that produced hundreds of thousands of machine learning models from a diverse collection of software features. These models are random in the sense that each one selects its features at random from the entire pool of features. Even though the vast majority of models are ineffective, we could produce several models that yield accurate predictions, thus classifying defective classes in the Java projects. Among these accurate models, our results indicate that change metrics are more prevalent than entropy or class-level metrics. We concentrated our analysis on models that rank a randomly chosen defective class higher than a randomly chosen clean class with over 80% accuracy. We also report and discuss some features contributing to the explanation of model decisions. Therefore, our study promotes reasoning on which features support predicting defects in these projects. Finally, we present the implications of our work for practitioners.
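A minimal sketch of the random-feature-subset idea (the model family, split, and thresholds are placeholders, not the paper’s exact protocol): train many classifiers, each on a randomly drawn subset of the metrics, and keep those whose ranking accuracy is high. ROC AUC is used because it equals the probability that a randomly chosen defective class is ranked above a randomly chosen clean one.

    import random
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def random_feature_models(X, y, feature_names, n_models=1000, min_auc=0.8, seed=0):
        # X: (n_classes, n_features) NumPy array of metrics; y: 1 = defective, 0 = clean
        rng = random.Random(seed)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=seed, stratify=y)
        kept = []
        for _ in range(n_models):
            k = rng.randint(2, len(feature_names))           # random subset size
            cols = rng.sample(range(len(feature_names)), k)  # random feature subset
            model = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te[:, cols])[:, 1])
            if auc >= min_auc:                               # keep only the accurate models
                kept.append((auc, [feature_names[c] for c in cols]))
        return sorted(kept, reverse=True)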
DOI: https://doi.org/10.1109/SCAM51674.2020.00016
Citations: 4
Compositional Information Flow Analysis for WebAssembly Programs
Quentin Stiévenart, Coen De Roover
WebAssembly is a new W3C standard, providing a portable target for compilation for various languages. All major browsers can run WebAssembly programs, and its use extends beyond the web: there is interest in compiling cross-platform desktop applications, server applications, IoT and embedded applications to WebAssembly because of the performance and security guarantees it aims to provide. Indeed, WebAssembly has been carefully designed with security in mind. In particular, WebAssembly applications are sandboxed from their host environment. However, recent works have brought to light several limitations that expose WebAssembly to traditional attack vectors. Visitors of websites using WebAssembly have been exposed to malicious code as a result.In this paper, we propose an automated static program analysis to address these security concerns. Our analysis is focused on information flow and is compositional. For every WebAssembly function, it first computes a summary that describes in a sound manner where the information from its parameters and the global program state can flow to. These summaries can then be applied during the subsequent analysis of function calls. Through a classical fixed-point formulation, one obtains an approximation of the information flow in the WebAssembly program. This results in the first compositional static analysis for WebAssembly. On a set of 34 benchmark programs spanning 196kLOC of WebAssembly, we compute at least 64% of the function summaries precisely in less than a minute in total.
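As a toy illustration of compositional summaries driven to a fixed point (the miniature stack-based IR below is invented and far simpler than WebAssembly; this is not the analysis from the paper), each function’s summary records which of its parameters may flow to its result, and the summaries of callees are applied at call sites until nothing changes:

    def summarise(func, funcs, summaries):
        """Return the set of parameter indices of `func` that may flow to its result."""
        stack = []
        for instr in funcs[func]:
            if instr[0] == 'param':                         # push parameter i: tainted by i
                stack.append({instr[1]})
            elif instr[0] == 'const':                       # push a constant: no taint
                stack.append(set())
            elif instr[0] == 'binop':                       # combine the two operands' taints
                stack.append(stack.pop() | stack.pop())
            elif instr[0] == 'call':                        # apply the callee's current summary
                callee, arity = instr[1], instr[2]
                args = [stack.pop() for _ in range(arity)][::-1]
                flows = summaries.get(callee, set())
                stack.append(set().union(*(args[i] for i in flows)) if flows else set())
        return stack[-1] if stack else set()

    def fixpoint(funcs):
        summaries = {f: set() for f in funcs}
        changed = True
        while changed:                                      # iterate until no summary grows
            changed = False
            for f in funcs:
                new = summarise(f, funcs, summaries)
                if new != summaries[f]:
                    summaries[f], changed = new, True
        return summaries

    program = {'id':   [('param', 0)],
               'main': [('param', 0), ('const', 7), ('binop',), ('call', 'id', 1)]}
    print(fixpoint(program))                                # {'id': {0}, 'main': {0}}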
DOI: https://doi.org/10.1109/SCAM51674.2020.00007
Citations: 18