
Latest Publications: 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)

Using Mutant Stubbornness to Create Minimal and Prioritized Test Sets
Loreto Gonzalez-Hernandez, B. Lindström, A. Offutt, S. F. Andler, P. Potena, M. Bohlin
In testing, engineers want to run the most useful tests early (prioritization). When tests are run hundreds or thousands of times, minimizing a test set can result in significant savings (minimization). This paper proposes a new analysis technique to address both the minimal test set and the test case prioritization problems. This paper precisely defines the concept of mutant stubbornness, which is the basis for our analysis technique. We empirically compare our technique with other test case minimization and prioritization techniques in terms of the size of the minimized test sets and how quickly mutants are killed. We used seven C language subjects from the Siemens Repository, specifically the test sets and the killing matrices from a previous study. We used 30 different orderings for each set and ran every technique 100 times over each set. Results show that our analysis technique performed significantly better than prior techniques at creating minimal test sets and was able to establish new bounds for all cases. Also, our analysis technique killed mutants as fast as or faster than prior techniques. These results indicate that our mutant stubbornness technique constructs test sets that are both minimal in size and effectively prioritized, performing as well as or better than other techniques.
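The abstract does not spell out how stubbornness is computed from a killing matrix, so the Python sketch below is only a plausible reading, under the assumption that a mutant is more stubborn when fewer tests kill it: stubborn mutants are handled first, and each selected test is the one that also covers the most remaining mutants, yielding an ordered (prioritized) and near-minimal set. All names and the example matrix are hypothetical.

```python
# Illustrative sketch only: the paper defines "mutant stubbornness" precisely,
# but the abstract does not, so stubbornness is approximated here as the
# scarcity of tests that kill a mutant. All names are hypothetical.

def prioritized_minimal_set(kill_matrix):
    """kill_matrix[t][m] is True if test t kills mutant m."""
    tests = list(kill_matrix.keys())
    mutants = {m for row in kill_matrix.values() for m in row}
    killers = {m: {t for t in tests if kill_matrix[t].get(m)} for m in mutants}
    # Drop equivalent mutants (killed by no test).
    killable = {m for m, ks in killers.items() if ks}
    # Most "stubborn" first: fewest killing tests.
    order = sorted(killable, key=lambda m: len(killers[m]))

    selected, covered = [], set()
    for m in order:
        if m in covered:
            continue
        # Among the killers of this stubborn mutant, pick the test that also
        # kills the most still-uncovered mutants.
        best = max(killers[m],
                   key=lambda t: sum(1 for x in killable - covered
                                     if kill_matrix[t].get(x)))
        selected.append(best)
        covered |= {x for x in killable if kill_matrix[best].get(x)}
    return selected  # ordered: earlier tests target the most stubborn mutants


if __name__ == "__main__":
    km = {
        "t1": {"m1": True, "m2": True},
        "t2": {"m2": True, "m3": True},
        "t3": {"m3": True},
    }
    print(prioritized_minimal_set(km))
```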
{"title":"Using Mutant Stubbornness to Create Minimal and Prioritized Test Sets","authors":"Loreto Gonzalez-Hernandez, B. Lindström, A. Offutt, S. F. Andler, P. Potena, M. Bohlin","doi":"10.1109/QRS.2018.00058","DOIUrl":"https://doi.org/10.1109/QRS.2018.00058","url":null,"abstract":"In testing, engineers want to run the most useful tests early (prioritization). When tests are run hundreds or thousands of times, minimizing a test set can result in significant savings (minimization). This paper proposes a new analysis technique to address both the minimal test set and the test case prioritization problems. This paper precisely defines the concept of mutant stubbornness, which is the basis for our analysis technique. We empirically compare our technique with other test case minimization and prioritization techniques in terms of the size of the minimized test sets and how quickly mutants are killed. We used seven C language subjects from the Siemens Repository, specifically the test sets and the killing matrices from a previous study. We used 30 different orders for each set and ran every technique 100 times over each set. Results show that our analysis technique performed significantly better than prior techniques for creating minimal test sets and was able to establish new bounds for all cases. Also, our analysis technique killed mutants as fast or faster than prior techniques. These results indicate that our mutant stubbornness technique constructs test sets that are both minimal in size, and prioritized effectively, as well or better than other techniques.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131811077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
An Empirical Study of the Impact of Code Smell on File Changes
Can Zhu, Xiaofang Zhang, Yang Feng, Lin Chen
Code smells are considered to have negative impacts on software evolution and maintenance. Many researchers have conducted studies to investigate these effects and correlations. However, because code smells constantly change as software evolves, understanding these changes and their correlation with operations on source code files helps developers during maintenance. In this paper, we conduct an extensive empirical study on four popular Java projects, spanning 58 release versions, to investigate the correlation between code smells and basic operations on source code files. We find that the density of code smells decreases as the software evolves. Files containing smells are more likely to be modified, while smells are not strongly correlated with adding or removing files. Furthermore, certain smells have a significant impact on file changes. These findings help developers understand the evolution of code smells and better focus on quality assurance.
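As a rough illustration of the release-level measurements described above, the following Python sketch computes smell density (smells per KLOC) per release and compares how often smelly versus clean files are modified; the data shapes and figures are hypothetical, not the study's data.

```python
# A minimal sketch of release-level smell measurements. `releases` maps a
# version to its smell count, size in LOC, smelly files, and the files
# modified since the prior release; all values are made up for illustration.

def smell_density(smell_count, loc):
    """Smells per thousand lines of code (KLOC)."""
    return 1000.0 * smell_count / loc if loc else 0.0

def modification_rates(smelly_files, modified_files, all_files):
    """Share of smelly vs. clean files that were modified in a release."""
    clean_files = all_files - smelly_files
    rate = lambda group: (len(group & modified_files) / len(group)) if group else 0.0
    return rate(smelly_files), rate(clean_files)

if __name__ == "__main__":
    releases = {
        "1.0": dict(smells=120, loc=40_000, smelly={"A.java", "B.java"},
                    modified={"A.java", "C.java"},
                    files={"A.java", "B.java", "C.java"}),
        "1.1": dict(smells=110, loc=46_000, smelly={"A.java"},
                    modified={"A.java"},
                    files={"A.java", "B.java", "C.java"}),
    }
    for version, r in releases.items():
        density = smell_density(r["smells"], r["loc"])
        smelly_rate, clean_rate = modification_rates(r["smelly"], r["modified"], r["files"])
        print(f"{version}: {density:.1f} smells/KLOC, "
              f"smelly files modified {smelly_rate:.0%}, clean files {clean_rate:.0%}")
```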
{"title":"An Empirical Study of the Impact of Code Smell on File Changes","authors":"Can Zhu, Xiaofang Zhang, Yang Feng, Lin Chen","doi":"10.1109/QRS.2018.00037","DOIUrl":"https://doi.org/10.1109/QRS.2018.00037","url":null,"abstract":"Code smells are considered to have negative impacts on software evolution and maintenance. Many researchers have conducted studies to investigate these effects and correlations. However, because code smells constantly change in the evolution, understanding these changes and the correlation between them and the operations of source code files is helpful for developers in maintenance. In this paper, on four popular Java projects with 58 release versions, we conduct an extensive empirical study to investigate the correlation between code smells and basic operations of source code files. We find that, the density of code smells decreases with the software evolution. The files containing smells have a higher likelihood to be modified while smells are not strongly correlated with adding or removing files. Furthermore, some certain smells have significant impact on file changes. These findings are helpful for developers to understand the evolution of code smells and better focus on quality assurance.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129121575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Program Slicing-Based Bayesian Network Model for Change Impact Analysis
Ekincan Ufuktepe, Tugkan Tuglular
Change impact analysis plays an important role in identifying areas potentially affected by changes made to a piece of software. Most existing change impact analysis techniques are based on architectural design and change history. Source code-based change impact analysis studies, however, are very few, yet they have shown higher precision in their results. In this study, a static, method-granularity change impact analysis that uses program slicing and a Bayesian network technique is proposed. The technique builds a directed graph model that also represents the call dependencies between methods. An open-source Java project with 8999 to 9445 lines of code and 505 to 528 methods has been analyzed across the 32 commits it went through. Recall and F-measure metrics have been used to evaluate the precision of the proposed method, with each software commit analyzed separately.
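The following Python sketch is not the paper's exact model; it only illustrates the general idea of propagating change impact over a directed call-dependency graph with probabilistic (noisy-OR style) combination, which is one plausible way a Bayesian-network-like analysis could score impacted methods. The edge probability and method names are assumptions.

```python
# Illustrative sketch, not the paper's model: propagate a change-impact
# probability from a changed method to its callers over a directed call graph.
# The conditional impact probability per edge is a hypothetical constant.

def impact_probabilities(callers, changed, edge_prob=0.5, depth=5):
    """callers[m] = set of methods that call m. Returns P(impacted) per method."""
    prob = {changed: 1.0}
    frontier = {changed}
    for _ in range(depth):
        nxt = set()
        for callee in frontier:
            for caller in callers.get(callee, ()):
                # Noisy-OR style combination of evidence reaching a caller.
                p_new = 1.0 - (1.0 - prob.get(caller, 0.0)) * (1.0 - edge_prob * prob[callee])
                if p_new > prob.get(caller, 0.0) + 1e-9:
                    prob[caller] = p_new
                    nxt.add(caller)
        frontier = nxt
    return prob

if __name__ == "__main__":
    callers = {"parse": {"load"}, "load": {"main"}, "validate": {"load"}}
    print(impact_probabilities(callers, changed="parse"))
```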
{"title":"A Program Slicing-Based Bayesian Network Model for Change Impact Analysis","authors":"Ekincan Ufuktepe, Tugkan Tuglular","doi":"10.1109/QRS.2018.00062","DOIUrl":"https://doi.org/10.1109/QRS.2018.00062","url":null,"abstract":"Change impact analysis plays an important role in identifying potential affected areas that are caused by changes that are made in a software. Most of the existing change impact analysis techniques are based on architectural design and change history. However, source code-based change impact analysis studies are very few and they have shown higher precision in their results. In this study, a static method-granularity level change impact analysis, that uses program slicing and Bayesian Network technique has been proposed. The technique proposes a directed graph model that also represents the call dependencies between methods. In this study, an open source Java project with 8999 to 9445 lines of code and from 505 to 528 methods have been analyzed through 32 commits it went. Recall and f-measure metrics have been used for evaluation of the precision of the proposed method, where each software commit has been analyzed separately.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127424755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
On the Suitability of a Portfolio-Based Design Improvement Approach
Johannes Bräuer, Reinhold Plösch, Christian Körner, Matthias Saft
The design debt metaphor tries to illustrate quality deficits in the design of a software system and their impact on the business value of the system. To pay off the debt, the literature offers various approaches for identifying and prioritizing these design flaws, but without proper support for aligning strategic improvement actions with the identified issues. This work addresses this challenge and examines the suitability of our proposed portfolio-based design assessment approach. The investigation is conducted through three case studies in which the product source code was analyzed and assessed using our portfolio-based approach. As a result, the approach has proven able to recommend concrete and valuable design improvement actions that can be adapted to project constraints.
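As a loosely hedged illustration of what a portfolio-style assessment might look like, the Python sketch below places design-quality findings in a two-by-two portfolio by importance and current conformance and maps each quadrant to a type of improvement action; the dimensions, thresholds, and findings are illustrative assumptions, not the paper's actual portfolio model.

```python
# Hedged sketch of portfolio-style classification: each design finding is
# scored on (importance, conformance) and mapped to an action quadrant.
# Dimensions, thresholds, and the findings below are illustrative assumptions.

def portfolio_quadrant(importance, conformance, threshold=0.5):
    if importance >= threshold and conformance < threshold:
        return "invest: prioritized design improvement"
    if importance >= threshold:
        return "maintain: keep current design practice"
    if conformance < threshold:
        return "tolerate: fix opportunistically"
    return "ignore: no action needed"

if __name__ == "__main__":
    findings = {"god-class": (0.9, 0.3), "long-method": (0.4, 0.2), "naming": (0.8, 0.8)}
    for name, (importance, conformance) in findings.items():
        print(name, "->", portfolio_quadrant(importance, conformance))
```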
{"title":"On the Suitability of a Portfolio-Based Design Improvement Approach","authors":"Johannes Bräuer, Reinhold Plösch, Christian Körner, Matthias Saft","doi":"10.1109/QRS.2018.00038","DOIUrl":"https://doi.org/10.1109/QRS.2018.00038","url":null,"abstract":"The design debt metaphor tries to illustrate quality deficits in the design of a software and the impact thereof to the business value of the system. To pay off the debt, the literature offers various approaches for identifying and prioritizing these design flaws, but without proper support in aligning strategic improvement actions to the identified issues. This work addresses this challenge and examines the suitability of our proposed portfolio-based design assessment approach. Therefore, this investigation is conducted based on three case studies where the product source code was analyzed and assessed using our portfolio-based approach. As a result, the approach has proven to be able to recommend concrete and valuable design improvement actions that can be adapted to project constraints.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124069785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
QRS 2018 Steering Committee
W. Wong
{"title":"QRS 2018 Steering Committee","authors":"W. Wong","doi":"10.1109/qrs.2018.00008","DOIUrl":"https://doi.org/10.1109/qrs.2018.00008","url":null,"abstract":"","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124716947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Analysis of Complex Industrial Test Code Using Clone Analysis
Wafa Hasanain, Y. Labiche, Sigrid Eldh
Many companies, including Ericsson, experience increased software verification costs. Agile cross-functional teams find it easy to add new test cases for every change and fix. The consequence of this phenomenon is duplication of test code. In this paper, we perform an industrial case study that aims at better understanding such duplicated test fragments, or as we call them, clones. In our study, 49% of the entire test code (measured in LOC) consists of clones. The reported results include figures about clone frequencies, types, similarity, fragments, and size distributions, as well as the number of line differences between cloned test cases. Keeping clones consistent and removing unnecessary clones throughout the testing process of large-scale commercial software remains a challenge.
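A minimal Python sketch of the LOC-based clone ratio reported above (49% of the test code): clone detection itself is assumed to have already produced fragment spans, so only the aggregation step is shown, with hypothetical file names and sizes.

```python
# Sketch of the LOC-based clone ratio. A clone detector is assumed to have
# produced `clone_fragments` as (file, start_line, end_line) tuples.

def clone_loc_ratio(clone_fragments, file_sizes):
    """Fraction of all lines that fall inside at least one clone fragment."""
    cloned = {path: set() for path in file_sizes}
    for path, start, end in clone_fragments:
        cloned.setdefault(path, set()).update(range(start, end + 1))
    cloned_loc = sum(len(lines) for lines in cloned.values())
    total_loc = sum(file_sizes.values())
    return cloned_loc / total_loc if total_loc else 0.0

if __name__ == "__main__":
    fragments = [("test_a.c", 10, 29), ("test_b.c", 1, 20), ("test_b.c", 15, 34)]
    sizes = {"test_a.c": 100, "test_b.c": 60}
    print(f"clone ratio: {clone_loc_ratio(fragments, sizes):.0%}")
```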
{"title":"An Analysis of Complex Industrial Test Code Using Clone Analysis","authors":"Wafa Hasanain, Y. Labiche, Sigrid Eldh","doi":"10.1109/QRS.2018.00061","DOIUrl":"https://doi.org/10.1109/QRS.2018.00061","url":null,"abstract":"Many companies, including Ericsson, experience increased software verification costs. Agile cross-functional teams find it easy to make new additions of test cases for every change and fix. The consequence of this phenomenon is duplications of test code. In this paper, we perform an industrial case study that aims at better understanding such duplicated test fragments or as we call them, clones. In our study, 49% (LOC) of the entire test code are clones. The reported results include figures about clone frequencies, types, similarity, fragments, and size distributions, and the number of line differences in cloned test cases. It is challenging to keep clones consistent and remove unnecessary clones during the entire testing process of large-scale commercial software.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134146718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
An Automatic Parameterized Verification of FLASH Cache Coherence Protocol
Yongjian Li, Jialun Cao, Kaiqiang Duan
The FLASH protocol is an industrial-scale cache coherence protocol and a challenging benchmark in the formal verification area. Verifying such a protocol yields both scientific and commercial value. However, the complicated mechanism of the protocol and the explosion of search states make it extremely hard to verify. An alternative solution is to write proof scripts that combine manual work with a computer, which is the approach adopted by most works in this area. However, this combination makes the verification process neither effective nor rigorous. Therefore, in this paper, we elaborate the detailed process by which paraVerifier generates formal proofs automatically. It can generate a formal proof without manual work while guaranteeing rigorous correctness. Furthermore, we illustrate the flow chart of READ and WRITE transactions in the FLASH protocol and analyze the semantics hiding behind the automatically searched invariants. We show that paraVerifier can not only generate formal proofs automatically but also offer comprehensive analysis reports for better understanding.
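The toy Python sketch below is not paraVerifier and is not parameterized; it only illustrates the kind of coherence invariant such tools establish, via explicit-state exploration of a simplified three-node MSI-like model and a check that at most one node holds the line in Modified state. The transition rules are assumptions for illustration.

```python
# Toy explicit-state check of a coherence invariant on a simplified
# 3-node MSI-like model (states: Invalid, Shared, Modified).
# Invariant checked: at most one node holds the cache line in Modified state.

def successors(config):
    """Hypothetical transition rules for one cache line across the nodes."""
    for i, state in enumerate(config):
        if state == "I":            # acquire shared: any Modified holder is downgraded
            yield tuple("S" if j == i else ("S" if c == "M" else c)
                        for j, c in enumerate(config))
        if state in ("I", "S"):     # acquire exclusive: all other nodes are invalidated
            yield tuple("M" if j == i else "I" for j, _ in enumerate(config))

def check_invariant(n_nodes=3):
    init = tuple("I" for _ in range(n_nodes))
    seen, frontier = {init}, [init]
    while frontier:
        config = frontier.pop()
        assert sum(1 for s in config if s == "M") <= 1, f"violated at {config}"
        for nxt in successors(config):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

if __name__ == "__main__":
    print("reachable states:", check_invariant())
```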
{"title":"An Automatic Parameterized Verification of FLASH Cache Coherence Protocol","authors":"Yongjian Li, Jialun Cao, Kaiqiang Duan","doi":"10.1109/QRS.2018.00018","DOIUrl":"https://doi.org/10.1109/QRS.2018.00018","url":null,"abstract":"FLASH protocol is an industrial-scale cache coherence protocol, which is a challenging benchmark in the formal verification area. Verifying such protocol yields both scientific and commercial values. However, the complicated mechanism of protocols and the explosive searching states make it extremely hard to solve. An alternative solution is to carry out proof scripts combining manual work with a computer, which is adopted by most works in this area. However, this alternation makes the verification process neither effective nor rigorous. Therefore, in this paper, we elaborate the detailed process of how paraVerifier generates formal proofs automatically. It can generate a formal proof without manual works, and guarantee the rigorous correctness at the same time. Furthermore, we also illustrate the flow chart of READ and WRITE transactions in FLASH protocol, and analyze the semantics hiding behind the auto-searched invariants. We show that paraVerifier can not only automatically generate formal proofs, but offer comprehensive analyzing reports for better understanding.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"323 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133949543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spiral^SRA: A Threat-Specific Security Risk Assessment Framework for the Cloud
A. Nhlabatsi, Jin B. Hong, Dong Seong Kim, Rachael Fernandez, Noora Fetais, K. Khan
Conventional security risk assessment approaches for cloud infrastructures do not explicitly consider risk with respect to specific threats. This is a challenge for a cloud provider because it may apply the same risk assessment approach in assessing the risk of all of its clients. In practice, the threats faced by each client may vary depending on their security requirements. The cloud provider may also apply generic mitigation strategies that are not guaranteed to be effective in thwarting specific threats for different clients. This paper proposes a threat-specific risk assessment framework which evaluates the risk with respect to specific threats by considering only those threats that are relevant to a particular cloud client. The risk assessment process is divided into three phases whose inter-related activities are arranged in a spiral. Application of the framework to a cloud deployment case study shows that considering risk with respect to specific threats leads to a more accurate quantification of security risk. Although our framework is motivated by risk assessment challenges in the cloud, it can be applied in any network environment.
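A hedged Python sketch of the threat-specific idea: risk is aggregated only over the threats relevant to a given client rather than as one generic score for all clients. The threat catalog, likelihoods, and impacts below are illustrative assumptions, not values from the paper.

```python
# Threat-specific risk scoring sketch: each client's risk is summed only over
# its relevant threats. Threat names, likelihoods, and impacts are made up.

THREAT_CATALOG = {
    # threat: (likelihood in [0, 1], impact in [0, 10])
    "data-exfiltration": (0.3, 9.0),
    "vm-escape":         (0.1, 10.0),
    "dos":               (0.5, 4.0),
}

def client_risk(relevant_threats, catalog=THREAT_CATALOG):
    """Sum of likelihood x impact over the client's relevant threats only."""
    return sum(likelihood * impact
               for threat, (likelihood, impact) in catalog.items()
               if threat in relevant_threats)

if __name__ == "__main__":
    # Two clients with different security requirements get different risk
    # values from the same catalog, unlike a one-size-fits-all assessment.
    print("web shop :", client_risk({"dos", "data-exfiltration"}))
    print("batch HPC:", client_risk({"vm-escape"}))
```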
{"title":"Spiral^SRA: A Threat-Specific Security Risk Assessment Framework for the Cloud","authors":"A. Nhlabatsi, Jin B. Hong, Dong Seong Kim, Rachael Fernandez, Noora Fetais, K. Khan","doi":"10.1109/QRS.2018.00049","DOIUrl":"https://doi.org/10.1109/QRS.2018.00049","url":null,"abstract":"Conventional security risk assessment approaches for cloud infrastructures do not explicitly consider risk with respect to specific threats. This is a challenge for a cloud provider because it may apply the same risk assessment approach in assessing the risk of all of its clients. In practice, the threats faced by each client may vary depending on their security requirements. The cloud provider may also apply generic mitigation strategies that are not guaranteed to be effective in thwarting specific threats for different clients. This paper proposes a threat-specific risk assessment framework which evaluates the risk with respect to specific threats by considering only those threats that are relevant to a particular cloud client. The risk assessment process is divided into three phases which have inter-related activities arranged in a spiral. Application of the framework to a cloud deployment case study shows that considering risk with respect to specific threats leads to a more accurate quantification of security risk. Although our framework is motivated by risk assessment challenges in the cloud it can be applied in any network environment.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133622346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Hypervisor-Based Sensitive Data Leakage Detector
Shu-Hao Chang, S. Mallissery, Chih-Hao Hsieh, Yu-Sung Wu
Sensitive Data Leakage (SDL) is a major issue faced by organizations due to their increasing reliance on data-driven decision-making. Existing Data Leakage Prevention (DLP) solutions are being challenged by the adoption of network transport encryption and the presence of privileged-mode malware designed to tamper with DLP agent programs. We propose a novel DLP system called "HyperSweep" that uses Virtual Machine Memory Introspection (VMI) technology to inspect the memory content of a guest system for sensitive information. The approach is robust against both network transport encryption and malware that attacks DLP agent programs. The HyperSweep prototype is implemented on top of the KVM hypervisor. Our experiments have confirmed its applicability to real-world applications, including web browsers, office applications, and social networking applications. The experiments also indicate moderate performance overhead from applying HyperSweep.
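The real system inspects guest memory through VMI on KVM; the Python sketch below only illustrates the pattern-matching step over a raw memory snapshot that such an inspection might yield. The patterns and the fake snapshot are assumptions.

```python
# Illustrative only: shows the sensitive-pattern scan over a memory snapshot,
# not the VMI acquisition itself. Patterns and the fake snapshot are made up.

import re

SENSITIVE_PATTERNS = {
    "email":       re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(rb"\b(?:\d[ -]?){13,16}\b"),
}

def scan_memory(snapshot: bytes):
    """Return (label, offset, match) for each sensitive-looking byte sequence."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.finditer(snapshot):
            findings.append((label, match.start(), match.group().decode(errors="replace")))
    return findings

if __name__ == "__main__":
    fake_guest_memory = b"\x00\x00user=alice@example.com\x00card=4111 1111 1111 1111\x00"
    for label, offset, text in scan_memory(fake_guest_memory):
        print(f"{label} at offset {offset}: {text}")
```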
{"title":"Hypervisor-Based Sensitive Data Leakage Detector","authors":"Shu-Hao Chang, S. Mallissery, Chih-Hao Hsieh, Yu-Sung Wu","doi":"10.1109/QRS.2018.00029","DOIUrl":"https://doi.org/10.1109/QRS.2018.00029","url":null,"abstract":"Sensitive Data Leakage (SDL) is a major issue faced by organizations due to increasing reliance on data-driven decision-making. Existing Data Leakage Prevention (DLP) solutions are being challenged by the adoption of network transport encryption and the presence of privileged-mode malware designed to tamper with the DLP agent programs. We propose a novel DLP system called \"HyperSweep\" that uses Virtual Machine Memory Introspection (VMI) technology to inspect the memory content of a guest system for sensitive information. The approach is robust against both network transport encryption and malware that attack DLP agent programs. The HyperSweep prototype is implemented on top of the KVM hypervisor. Our experiments have confirmed its applicability to real-world applications, including web browsers, office applications, and social networking applications. The experiments also indicate moderate performance overhead from applying HyperSweep.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134550384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
SynEva: Evaluating ML Programs by Mirror Program Synthesis
Yi Qin, Huiyan Wang, Chang Xu, Xiaoxing Ma, Jian Lu
Machine learning (ML) programs are being widely used in various human-related applications. However, testing them remains a challenging problem, and one can hardly decide whether and how the existing knowledge extracted from training scenarios suits new scenarios. Existing approaches typically have restricted usage due to their assumptions about the availability of an oracle, a comparable implementation, or manual inspection effort. We solve this problem by proposing a novel program synthesis-based approach, SynEva, which can systematically construct an oracle-alike mirror program for similarity measurement and automatically compare it with the existing knowledge on new scenarios to decide how that knowledge suits them. SynEva is lightweight and fully automated. Our experimental evaluation with real-world data sets validates SynEva's effectiveness, showing strong correlation and little overhead. We expect that SynEva can apply to, and help evaluate, more ML programs for new scenarios.
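A much-simplified, hedged sketch of the mirror-program idea in Python: synthesize a simple surrogate from data observed in the new scenario and measure how often it agrees with the original ML program, where low agreement would suggest the trained knowledge does not suit the new scenario. The nearest-centroid mirror and the agreement metric are illustrative assumptions, not SynEva itself.

```python
# Hedged sketch: a nearest-centroid "mirror" is synthesized from new-scenario
# samples and compared against the original model. Not SynEva's actual method.

def synthesize_mirror(samples):
    """samples: list of (feature_vector, label). Returns a nearest-centroid classifier."""
    sums, counts = {}, {}
    for x, y in samples:
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums[y], x)]
    centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}

    def mirror(x):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return min(centroids, key=lambda y: dist(centroids[y]))
    return mirror

def agreement(original_model, mirror, inputs):
    """Fraction of inputs on which the mirror and the original model agree."""
    hits = sum(1 for x in inputs if original_model(x) == mirror(x))
    return hits / len(inputs) if inputs else 0.0

if __name__ == "__main__":
    original = lambda x: int(x[0] + x[1] > 1.0)  # stand-in for the trained ML program
    new_scenario = [((0.1, 0.2), 0), ((0.9, 0.9), 1), ((0.2, 0.1), 0), ((1.2, 0.4), 1)]
    mirror = synthesize_mirror(new_scenario)
    print("agreement:", agreement(original, mirror, [x for x, _ in new_scenario]))
```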
{"title":"SynEva: Evaluating ML Programs by Mirror Program Synthesis","authors":"Yi Qin, Huiyan Wang, Chang Xu, Xiaoxing Ma, Jian Lu","doi":"10.1109/QRS.2018.00031","DOIUrl":"https://doi.org/10.1109/QRS.2018.00031","url":null,"abstract":"Machine learning (ML) programs are being widely used in various human-related applications. However, their testing always remains to be a challenging problem, and one can hardly decide whether and how the existing knowledge extracted from training scenarios suit new scenarios. Existing approaches typically have restricted usages due to their assumptions on the availability of an oracle, comparable implementation, or manual inspection efforts. We solve this problem by proposing a novel program synthesis based approach, SynEva, that can systematically construct an oracle-alike mirror program for similarity measurement, and automatically compare it with the existing knowledge on new scenarios to decide how the knowledge suits the new scenarios. SynEva is lightweight and fully automated. Our experimental evaluation with real-world data sets validates SynEva's effectiveness by strong correlation and little overhead results. We expect that SynEva can apply to, and help evaluate, more ML programs for new scenarios.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117343923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11