
Latest publications: 2013 35th International Conference on Software Engineering (ICSE)

1st International workshop on combining modelling and search-based software engineering (CMSBSE 2013)
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606763
M. Harman, R. Paige, James R. Williams
Modelling plays a vital and pervasive role in software engineering: it provides means to manage complexity via abstraction, and enables the creation of larger, more complex systems. Search-based software engineering (SBSE) offers a productive and proven approach to software engineering through automated discovery of near-optimal solutions to problems, and has proven itself effective on a wide variety of software and systems engineering problems. CMSBSE 2013 was a forum allowing researchers from both communities to meet, discuss synergies and differences, and present topics related to the intersection of search and modelling. Particular goals of CMSBSE were to highlight that SBSE and modelling have substantial conceptual and technical synergy, and to identify and present opportunities in which they can be combined, whilst also aiming to grow the community working in this area.
Citations: 2
Mining SQL injection and cross site scripting vulnerabilities using hybrid program analysis
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606610
Lwin Khin Shar, Hee Beng Kuan Tan, L. Briand
In previous work, we proposed a set of static attributes that characterize input validation and input sanitization code patterns. We showed that some of the proposed static attributes are significant predictors of SQL injection and cross site scripting vulnerabilities. Static attributes have the advantage of reflecting general properties of a program. Yet, dynamic attributes collected from execution traces may reflect more specific code characteristics that are complementary to static attributes. Hence, to improve our initial work, in this paper, we propose the use of dynamic attributes to complement static attributes in vulnerability prediction. Furthermore, since existing work relies on supervised learning, it is dependent on the availability of training data labeled with known vulnerabilities. This paper presents prediction models that are based on both classification and clustering in order to predict vulnerabilities, working in the presence or absence of labeled training data, respectively. In our experiments across six applications, our new supervised vulnerability predictors based on hybrid (static and dynamic) attributes achieved, on average, 90% recall and 85% precision, a sharp increase in recall compared to static analysis-based predictions. Though not nearly as accurate, our unsupervised predictors based on clustering achieved, on average, 76% recall and 39% precision, thus suggesting they can be useful in the absence of labeled training data.
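The recall and precision figures quoted above can be computed as follows; a minimal sketch with made-up labels (the data shown is illustrative, not the paper's):

```python
# Recall: fraction of truly vulnerable units the predictor flags.
# Precision: fraction of flagged units that are truly vulnerable.

def recall_precision(actual, predicted):
    """actual/predicted: sequences of 0/1 flags per code unit (1 = vulnerable)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

actual    = [1, 1, 0, 0, 1, 0, 1, 0]   # illustrative ground-truth labels
predicted = [1, 1, 1, 0, 0, 0, 1, 0]   # illustrative predictor output
r, p = recall_precision(actual, predicted)
print(r, p)  # 0.75 0.75: 3 of 4 vulnerable units found; 3 of 4 flags are real
```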
Citations: 99
Increasing anomaly handling efficiency in large organizations using applied machine learning
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606717
Leif Jonsson
Maintenance costs can be substantial for large organizations (several hundreds of programmers) with very large and complex software systems. By large we mean lines of code in the range of hundreds of thousands or millions. Our research objective is to improve the process of handling anomaly reports for large organizations. Specifically, we are addressing the problem of the manual, laborious and time consuming process of assigning anomaly reports to the correct design teams and the related issue of localizing faults in the system architecture. In large organizations, with complex systems, this is particularly problematic because the receiver of an anomaly report may not have detailed knowledge of the whole system. As a consequence, anomaly reports may be assigned to the wrong team in the organization, causing delays and unnecessary work. We have so far developed two machine learning prototypes to validate our approach. The latest, a re-implementation and extension of the first, is being evaluated on four large systems at Ericsson AB. Our main goal is to investigate how large software development organizations can significantly improve development efficiency by replacing manual anomaly report assignment and fault localization with machine learning techniques. Our approach focuses on training machine learning systems on anomaly report databases; this is in contrast to many other approaches that are based on test case execution combined with program sampling and/or source code analysis.
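As a hedged illustration of the kind of automated assignment described (not Ericsson's actual system; team names, keywords, and report text below are hypothetical), a new report can be routed to the team whose previously assigned reports share the most vocabulary with it:

```python
# Train per-team word profiles from historical assignments, then route a new
# report to the highest-scoring team. Illustrative only.
from collections import Counter

def train(profiles, team, report_text):
    """Accumulate word counts from a report already assigned to `team`."""
    profiles.setdefault(team, Counter()).update(report_text.lower().split())

def assign(profiles, report_text):
    """Route a new report to the team whose history best matches its words."""
    words = report_text.lower().split()
    return max(profiles, key=lambda t: sum(profiles[t][w] for w in words))

profiles = {}
train(profiles, "radio", "handover failure in baseband scheduler")
train(profiles, "core", "billing database timeout during charging")
print(assign(profiles, "scheduler failure after handover"))  # radio
```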
Citations: 7
Bridging the gap between the total and additional test-case prioritization strategies
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606565
Lingming Zhang, Dan Hao, Lu Zhang, G. Rothermel, Hong Mei
In recent years, researchers have intensively investigated various topics in test-case prioritization, which aims to re-order test cases to increase the rate of fault detection during regression testing. The total and additional prioritization strategies, which prioritize based on total numbers of elements covered per test, and numbers of additional (not-yet-covered) elements covered per test, are two widely-adopted generic strategies used for such prioritization. This paper proposes a basic model and an extended model that unify the total strategy and the additional strategy. Our models yield a spectrum of generic strategies ranging between the total and additional strategies, depending on a parameter referred to as the p value. We also propose four heuristics to obtain differentiated p values for different methods under test. We performed an empirical study on 19 versions of four Java programs to explore our results. Our results demonstrate that wide ranges of strategies in our basic and extended models with uniform p values can significantly outperform both the total and additional strategies. In addition, our results also demonstrate that using differentiated p values for both the basic and extended models with method coverage can even outperform the additional strategy using statement coverage.
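A minimal sketch of the unified idea as described in the abstract (the paper's exact model may differ): score each not-yet-covered element at full weight and each already-covered element at weight p, so that p = 1 recovers the total strategy and p = 0 the additional strategy:

```python
# Greedy prioritization: repeatedly pick the test with the highest score,
# where covered elements contribute weight p and uncovered elements weight 1.

def prioritize(coverage, p):
    """coverage: dict mapping test name -> set of covered elements."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: sum(
            p if e in covered else 1.0 for e in remaining[t]))
        order.append(best)
        covered |= remaining.pop(best)
    return order

cov = {"t1": {1, 2, 3}, "t2": {1, 2}, "t3": {4}}
print(prioritize(cov, 1.0))  # total:      ['t1', 't2', 't3']
print(prioritize(cov, 0.0))  # additional: ['t1', 't3', 't2'] (t2 adds nothing)
```

Intermediate p values interpolate between the two strategies, which is the spectrum the paper explores.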
Citations: 153
Deciphering the story of software development through frequent pattern mining
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606677
Nicolas Bettenburg, Andrew Begel
Software teams record their work progress in task repositories which often require them to encode their activities in a set of edits to field values in a form-based user interface. When others read the tasks, they must decode the schema used to write the activities down. We interviewed four software teams and found out how they used the task repository fields to record their work activities. However, we also found that they had trouble interpreting task revisions that encoded for multiple activities at the same time. To assist engineers in decoding tasks, we developed a scalable method based on frequent pattern mining to identify patterns of frequently co-edited fields that each represent a conceptual work activity. We applied our method to two years of our interviewees' task repositories and were able to abstract 83,000 field changes into just 27 patterns that cover 95% of the task revisions. We used the 27 patterns to render the teams' tasks in web-based English newsfeeds and evaluated them with the product teams. The team agreed with most of our patterns and English interpretations, but outlined a number of improvements that we will incorporate into future work.
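The core mining step can be sketched as counting, per task revision, the exact set of fields edited together and keeping the sets that recur often enough. This is a simplification of full frequent-itemset mining (it counts only exact sets, not all subsets), and the field names are hypothetical:

```python
# Count how often each exact set of co-edited fields appears across
# revisions, keeping only the sets seen at least `min_support` times.
from collections import Counter

def frequent_coedits(revisions, min_support):
    """revisions: iterable of sets of field names edited together."""
    counts = Counter(frozenset(r) for r in revisions)
    return {fields: n for fields, n in counts.items() if n >= min_support}

revisions = [
    {"status", "assigned_to"},   # triage
    {"status", "assigned_to"},   # triage
    {"status", "resolution"},    # close
    {"status", "assigned_to"},   # triage
    {"priority"},
]
print(frequent_coedits(revisions, 2))  # only {'status', 'assigned_to'} recurs (3 times)
```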
Citations: 10
Analysis of user comments: An approach for software requirements evolution
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606604
Laura V. Galvis Carreño, K. Winbladh
User feedback is imperative in improving software quality. In this paper, we explore the rich set of user feedback available for third party mobile applications as a way to extract new/changed requirements for next versions. A potential problem using this data is its volume and the time commitment involved in extracting new/changed requirements. Our goal is to alleviate part of the process through automatic topic extraction. We process user comments to extract the main topics mentioned as well as some sentences representative of those topics. This information can be useful for requirements engineers to revise the requirements for next releases. Our approach relies on adapting information retrieval techniques including topic modeling and evaluating them on different publicly available data sets. Results show that the automatically extracted topics match the manually extracted ones, while also significantly decreasing the manual effort.
Citations: 308
3rd International workshop on collaborative teaching of globally distributed software development (CTGDSD 2013)
Pub Date : 2013-05-18 DOI: 10.5555/2486788.2487061
S. Faulk, M. Young, R. Prikladnicki, D. Weiss, Lian Yu
Software engineering project courses where student teams are geographically distributed can effectively simulate the problems of globally distributed software development (DSD). However, this pedagogical model has proven difficult to adopt or sustain. It requires significant pedagogical resources and collaboration infrastructure. Institutionalizing such courses also requires compatible and reliable teaching partners. The purpose of this workshop is to continue building on our outreach efforts to foster a community of international faculty and institutions committed to developing, teaching and researching DSD. Foundational materials presented will include pedagogical materials and infrastructure developed and used in teaching DSD courses along with results and lessons learned. The third CTGDSD workshop will also focus on publishing workshop results and collaborating with the larger DSD community. Longrange goals include: lowering adoption barriers by providing common pedagogical materials, collaboration infrastructure, and a pool of potential teaching partners from around the globe.
Citations: 0
Measuring the forensic-ability of audit logs for nonrepudiation
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606732
J. King
Forensic analysis of software log files is used to extract user behavior profiles, detect fraud, and check compliance with policies and regulations. Software systems maintain several types of log files for different purposes. For example, a system may maintain logs for debugging, monitoring application performance, and/or tracking user access to system resources. The objective of my research is to develop and validate a minimum set of log file attributes and software security metrics for user nonrepudiation by measuring the degree to which a given audit log file captures the data necessary to allow for meaningful forensic analysis of user behavior within the software system. For a log to enable user nonrepudiation, the log file must record certain data fields, such as a unique user identifier. The log must also record relevant user activity, such as creating, viewing, updating, and deleting system resources, as well as software security events, such as the addition or revocation of user privileges. Using a grounded theory method, I propose a methodology for observing the current state of activity logging mechanisms in healthcare, education, and finance, then I quantify differences between activity logs and logs not specifically intended to capture user activity. I will then propose software security metrics for quantifying the forensic-ability of log files. I will evaluate my work with empirical analysis by comparing the performance of my metrics on several types of log files, including both activity logs and logs not directly intended to record user activity. My research will help software developers strengthen user activity logs for facilitating forensic analysis for user nonrepudiation.
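One plausible shape for such a metric (the required-field list here is illustrative, not the dissertation's final attribute set) is the mean fraction of nonrepudiation-relevant fields that each log entry actually records:

```python
# Score a log by how completely its entries capture the fields needed to
# tie an action to a user: who, when, what, and on which resource.

REQUIRED = ("user_id", "timestamp", "action", "resource")

def forensic_score(entries):
    """Mean fraction of required fields present (and non-null) per entry."""
    per_entry = [
        sum(f in e and e[f] is not None for f in REQUIRED) / len(REQUIRED)
        for e in entries
    ]
    return sum(per_entry) / len(per_entry)

log = [
    {"user_id": "u17", "timestamp": "2013-05-18T09:00", "action": "view",
     "resource": "/records/42"},                         # complete: 1.0
    {"timestamp": "2013-05-18T09:01", "action": "update"},  # no user/resource: 0.5
]
print(forensic_score(log))  # 0.75
```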
Citation count: 5
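The abstract above proposes measuring the degree to which a log captures the data needed for user nonrepudiation. As a rough illustration of that kind of metric, the sketch below computes the fraction of records containing a minimum field set; the field names and the scoring rule are illustrative assumptions, not the paper's validated attributes or metrics.

```python
# Hypothetical sketch: score an audit log by how many of its records carry
# a minimum set of nonrepudiation fields. The REQUIRED_FIELDS set is an
# assumption for illustration, not the attribute set proposed in the paper.

REQUIRED_FIELDS = {"user_id", "timestamp", "action", "resource", "outcome"}

def forensic_coverage(records):
    """Fraction of records (0.0-1.0) containing all required fields."""
    if not records:
        return 0.0
    complete = sum(1 for r in records if REQUIRED_FIELDS <= r.keys())
    return complete / len(records)

logs = [
    {"user_id": "u42", "timestamp": "2013-05-18T10:00:00Z",
     "action": "update", "resource": "/patients/7", "outcome": "success"},
    {"timestamp": "2013-05-18T10:01:00Z", "action": "view"},  # no user_id
]
print(forensic_coverage(logs))  # 0.5
```

A real metric along these lines would also weight security-relevant events (privilege grants and revocations) rather than treating all fields equally.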
Detecting deadlock in programs with data-centric synchronization
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606578
Christian Hammer
Previously, we developed a data-centric approach to concurrency control in which programmers specify synchronization constraints declaratively, by grouping shared locations into atomic sets. We implemented our ideas in a Java extension called AJ, using Java locks to implement synchronization. We proved that atomicity violations are prevented by construction, and demonstrated that realistic Java programs can be refactored into AJ without significant loss of performance. This paper presents an algorithm for detecting possible deadlock in AJ programs by ordering the locks associated with atomic sets. In our approach, a type-based static analysis is extended to handle recursive data structures by considering programmer-supplied, compiler-verified lock ordering annotations. In an evaluation of the algorithm, all 10 AJ programs under consideration were shown to be deadlock-free. One program needed 4 ordering annotations and 2 others required minor refactorings. For the remaining 7 programs, no programmer intervention of any kind was required.
{"title":"Detecting deadlock in programs with data-centric synchronization","authors":"Christian Hammer","doi":"10.1109/ICSE.2013.6606578","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606578","url":null,"abstract":"Previously, we developed a data-centric approach to concurrency control in which programmers specify synchronization constraints declaratively, by grouping shared locations into atomic sets. We implemented our ideas in a Java extension called AJ, using Java locks to implement synchronization. We proved that atomicity violations are prevented by construction, and demonstrated that realistic Java programs can be refactored into AJ without significant loss of performance. This paper presents an algorithm for detecting possible deadlock in AJ programs by ordering the locks associated with atomic sets. In our approach, a type-based static analysis is extended to handle recursive data structures by considering programmer-supplied, compiler-verified lock ordering annotations. In an evaluation of the algorithm, all 10 AJ programs under consideration were shown to be deadlock-free. One program needed 4 ordering annotations and 2 others required minor refactorings. For the remaining 7 programs, no programmer intervention of any kind was required.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116276560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 30
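The lock-ordering idea in the abstract can be illustrated generically: if the "lock A is held while acquiring lock B" relation contains a cycle, a deadlock is possible; an acyclic order rules it out. The sketch below is a plain cycle check over such a graph, not the paper's type-based static analysis of AJ programs or its annotation checking.

```python
# Illustrative sketch of lock-order deadlock detection: build a directed
# graph with an edge A -> B whenever lock B is acquired while A is held,
# and report a possible deadlock iff the graph has a cycle. This is a
# generic reconstruction, not the AJ analysis from the paper.

def has_cycle(order_edges):
    """order_edges: dict mapping a lock to the set of locks acquired
    while it is held. Returns True if the ordering admits a deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / done
    color = {lock: WHITE for lock in order_edges}

    def dfs(u):
        color[u] = GRAY
        for v in order_edges.get(u, ()):
            c = color.setdefault(v, WHITE)
            if c == GRAY or (c == WHITE and dfs(v)):
                return True  # back edge: cycle found
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in list(color))

# Two threads taking the locks of two atomic sets in opposite orders:
print(has_cycle({"a": {"b"}, "b": {"a"}}))  # True
print(has_cycle({"a": {"b"}, "b": set()}))  # False
```

In the paper's setting the annotations fix a global order up front, so the check amounts to verifying every acquisition respects that order rather than searching for cycles after the fact.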
Semantic smells and errors in access control models: A case study in PHP
Pub Date : 2013-05-18 DOI: 10.1109/ICSE.2013.6606670
François Gauthier, E. Merlo
Access control models implement mechanisms to restrict access to sensitive data from unprivileged users. Access controls typically check privileges that capture the semantics of the operations they protect. Semantic smells and errors in access control models stem from privileges that are partially or totally unrelated to the action they protect. This paper presents a novel approach, partly based on static analysis and information retrieval techniques, for the automatic detection of semantic smells and errors in access control models. Investigation of the case study application revealed 31 smells and 2 errors. Errors were reported to developers who quickly confirmed their relevance and took actions to correct them. Based on the obtained results, we also propose three categories of semantic smells and errors to lay the foundations for further research on access control smells in other systems and domains.
{"title":"Semantic smells and errors in access control models: A case study in PHP","authors":"François Gauthier, E. Merlo","doi":"10.1109/ICSE.2013.6606670","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606670","url":null,"abstract":"Access control models implement mechanisms to restrict access to sensitive data from unprivileged users. Access controls typically check privileges that capture the semantics of the operations they protect. Semantic smells and errors in access control models stem from privileges that are partially or totally unrelated to the action they protect. This paper presents a novel approach, partly based on static analysis and information retrieval techniques, for the automatic detection of semantic smells and errors in access control models. Investigation of the case study application revealed 31 smells and 2 errors. Errors were reported to developers who quickly confirmed their relevance and took actions to correct them. Based on the obtained results, we also propose three categories of semantic smells and errors to lay the foundations for further research on access control smells in other systems and domains.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114976667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 5
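One way to picture a "semantic smell" from the abstract is a privilege check whose name shares no vocabulary with the operation it guards. The sketch below uses simple identifier-token overlap as a stand-in for the paper's static-analysis and information-retrieval machinery; the identifiers, pairing, and threshold are all illustrative assumptions.

```python
# Hedged sketch: flag (privilege, action) pairs as possible semantic
# smells when the two identifiers share no vocabulary. Token overlap is
# a crude proxy for the information-retrieval techniques in the paper.
import re

def tokens(identifier):
    """Split snake_case / camelCase identifiers into lowercase words."""
    return {p.lower() for p in re.findall(r"[A-Za-z][a-z]*", identifier)}

def smells(checks, threshold=0):
    """checks: list of (privilege, action) pairs guarding operations.
    Flags pairs whose shared vocabulary is at or below the threshold."""
    return [(priv, act) for priv, act in checks
            if len(tokens(priv) & tokens(act)) <= threshold]

checks = [
    ("delete_user", "deleteUserAccount"),   # privilege matches action
    ("view_reports", "updateBillingInfo"),  # unrelated privilege: smell
]
print(smells(checks))  # [('view_reports', 'updateBillingInfo')]
```

A threshold of zero flags only totally unrelated pairs, matching the abstract's "partially or totally unrelated" distinction at its extreme; partial mismatches would need a graded similarity measure.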