Title: Declarative access policies based on objects, relationships, and states
Author: Simin Chen
DOI: 10.1145/2384716.2384757
Published: 2012-10-19, ACM SIGPLAN International Conference on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH)

Abstract: Access policies are hard to express in existing programming languages. However, their accurate expression is a prerequisite for many of today's applications. We propose a new language that uses classes, first-class relationships, and first-class states to express access policies in a more declarative and fine-grained way than existing solutions allow.
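The paper's language is not shown in the abstract, but the idea of gating access on explicit relationships and object states can be illustrated in plain Python. This is a hypothetical sketch, not the proposed language: the `Document`, `Relationships`, and `may_edit` names are invented for illustration.

```python
# Hypothetical sketch (not the paper's language): an access policy expressed
# over objects, explicit relationships, and object states.

class Document:
    def __init__(self, owner):
        self.owner = owner
        self.state = "draft"  # first-class state: "draft" -> "published"

class Relationships:
    """Tracks first-class relationship triples such as (user, 'reviewer', doc)."""
    def __init__(self):
        self._rels = set()

    def add(self, subject, kind, target):
        self._rels.add((subject, kind, id(target)))

    def holds(self, subject, kind, target):
        return (subject, kind, id(target)) in self._rels

def may_edit(user, doc, rels):
    # Policy: the owner may always edit; reviewers may edit drafts only.
    if user == doc.owner:
        return True
    return doc.state == "draft" and rels.holds(user, "reviewer", doc)

rels = Relationships()
doc = Document(owner="alice")
rels.add("bob", "reviewer", doc)
assert may_edit("alice", doc, rels)   # owner, any state
assert may_edit("bob", doc, rels)     # reviewer of a draft
doc.state = "published"
assert not may_edit("bob", doc, rels) # state change revokes the permission
```

The point of making relationships and states first-class, as the paper proposes, is that a policy like "reviewers may edit drafts" becomes a declarative rule rather than scattered conditional checks.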
Title: AGERE!: programming based on actors, agents, and decentralized control
Authors: A. Ricci, Assaf Marron, Rafael Heitor Bordini, G. Agha
DOI: 10.1145/2384716.2384776

Abstract: The fundamental turn of software toward concurrency and distribution is not only a matter of performance, but also of design and abstraction. It calls for programming paradigms that, compared to current mainstream paradigms, allow us to more naturally think about, design, develop, execute, debug, and profile systems exhibiting different degrees of concurrency, autonomy, decentralization of control, and physical distribution. The AGERE! workshop focuses on research on programming systems, languages, and applications based on actors, agents, and any related programming paradigm that promotes a decentralized mindset in solving problems and in developing systems to implement such solutions. The workshop covers both the theory and the practice of design and programming, bringing together researchers working on models, languages, and technologies, and practitioners developing real-world systems and applications.
Title: Active type-checking and translation
Author: Cyrus Omar
DOI: 10.1145/2384716.2384764

Abstract: We introduce a statically-typed language extensibility mechanism called active type-checking and translation (AT&T) that aims toward expressiveness, safety, and composability. This mechanism allows users to equip type definitions with type-level functions that control the compilation process directly, at points that are relevant to that type's semantics.
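The core idea, letting a type definition carry the functions that check and translate its own terms, can be sketched in a toy compiler. This is an illustrative analogy, not Omar's actual mechanism; `IntType`, `StrType`, and the bytecode-like output strings are all invented for the example.

```python
# Toy sketch of the AT&T idea: each type definition carries its own
# type-checking and translation hooks, and the compiler defers to them.

class IntType:
    @staticmethod
    def check(literal):
        return isinstance(literal, int)

    @staticmethod
    def translate(literal):
        return f"PUSH_INT {literal}"

class StrType:
    @staticmethod
    def check(literal):
        return isinstance(literal, str)

    @staticmethod
    def translate(literal):
        return f'PUSH_STR "{literal}"'

def compile_literal(ty, literal):
    # The type itself drives both checking and code generation,
    # at exactly the point relevant to its semantics.
    if not ty.check(literal):
        raise TypeError(f"{literal!r} is not a valid {ty.__name__}")
    return ty.translate(literal)

assert compile_literal(IntType, 42) == "PUSH_INT 42"
assert compile_literal(StrType, "hi") == 'PUSH_STR "hi"'
```

Because each type's hooks are local to its definition, independently written extensions can coexist without editing a central compiler, which is the composability property the abstract emphasizes.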
Title: Developing competency in parallelism: techniques for education and training
Authors: Richard A. Brown, E. Gehringer
DOI: 10.1145/2384716.2384783

Abstract: With the increasing penetration of parallelism into computing, programmers of all stripes need to acquire competencies in concurrent programming. This workshop will concentrate on discussing and disseminating resources for gently introducing parallelism into programmers' skill sets. It will provide a venue for the developers and vendors of programming languages to showcase their facilities and training materials. It will seek short "killer" parallel application examples that can be used in academic or training environments. Another focus will be on short modules that can be used in short courses for practicing programmers, or dropped into academic courses dealing with some aspect of programming. Finally, it will provide a forum for showcasing tools for visualizing and/or teaching parallelism in programming.
Title: Is text search an effective approach for fault localization: a practitioners perspective
Authors: Vibha Sinha, Senthil Mani, Debdoot Mukherjee
DOI: 10.1145/2384716.2384770

Abstract: There has been widespread interest in both academia and industry in techniques that help with fault localization. Much of this work leverages static or dynamic code analysis and hence is constrained by the programming language used or the presence of test cases. To provide more generically applicable techniques, recent work has focused on text-search-based approaches that recommend source files a developer can modify to fix a bug. Text search may be used for fault localization in either of the following ways. We can search a repository of past bugs with the bug description to find similar bugs and recommend the source files that were modified to fix those bugs. Alternatively, we can directly search the code repository to find source files that share words with the bug report text. A few interesting questions come to mind when applying these text-based search techniques in real projects. For example, would searching on past fixed bugs yield better results than searching on code? What accuracy can one expect? Would giving preference to code words in the bug report improve the search results? In this paper, we apply variants of text search to four open-source projects and compare the impact of different design considerations on search efficacy.
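The second variant described above, searching the code repository for files that share words with the bug report, can be sketched as TF-IDF-weighted word overlap. This is an illustrative sketch, not the authors' implementation; the `rank_files` function and the tiny two-file corpus are invented for the example.

```python
# Illustrative sketch: ranking source files against a bug report by
# TF-IDF-weighted word overlap (the "search the code repository" variant).

import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[A-Za-z_]+", text.lower())

def rank_files(bug_report, files):
    """files: dict {path: file text}. Returns paths, best match first."""
    docs = {path: Counter(tokenize(text)) for path, text in files.items()}
    n = len(docs)
    # Document frequency of each word across the code corpus.
    df = Counter()
    for counts in docs.values():
        df.update(set(counts))
    # Words appearing in every file get idf 0 and thus carry no signal.
    idf = {w: math.log(n / df[w]) for w in df}
    query = set(tokenize(bug_report))
    scores = {
        path: sum(counts[w] * idf.get(w, 0.0) for w in query)
        for path, counts in docs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

files = {
    "auth.py": "def login(user, password): check password hash",
    "ui.py": "def render(button, label): draw button",
}
ranking = rank_files("crash when checking password at login", files)
assert ranking[0] == "auth.py"  # shares "password" and "login" with the report
```

Weighting code-like tokens (identifiers, camelCase fragments) more heavily than plain English words is one of the design considerations the paper compares.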
Title: Security through extensible type systems
Author: Nathan Fulton
DOI: 10.1145/2384716.2384761

Abstract: Researchers interested in security often wish to introduce new primitives into a language. Extensible languages hold promise in such scenarios, but only if the extension mechanism is sufficiently safe and expressive. This paper describes several modifications to an extensible language motivated by end-to-end security concerns.
Title: Standard-based strategy to assure the quality of the mobile software product
Author: Luis Corral
DOI: 10.1145/2384716.2384755

Abstract: The high relevance gained by mobile software applications, the large number of users, and the growing development competition create a need for a method to measure and track the quality of mobile software products from a domain-specific, quantitative point of view. We pursue a strategy that extends software quality standards with mechanisms to measure the quality of mobile software products, so that developers have a well-founded understanding of whether their applications meet the market's demands and user expectations.
Title: JavaScript: the used parts
Author: S. Gude
DOI: 10.1145/2384716.2384762

Abstract: We describe an empirical study of how JavaScript language features are used by programmers. Our test corpus is larger than in any previous work (more than 1 million scripts), and it examines JS usage from various points of view. We report the usage results for JS language features.
Title: Software tools research: a matter of scale and scope - or commoditization?
Panelists: S. Fraser, K. Cooper, J. Coplien, Ruth G. Lennon, Ramya Ravichandar, D. Spinellis, G. Succi
DOI: 10.1145/2384716.2384740

Abstract: Tools emerge as the result of necessity: a job needs to be done, automated, and scaled. In the "early days", needs such as compilers, code management, and bug tracking resulted in mostly local, home-grown tools, which, when broadly successful, spawned independent tools companies (from either industry or university origins), for example Klocwork from Nortel and Coverity from Stanford University. This panel will bring together academics and industry professionals to discuss challenges in tools research.
Title: Automated trendline generation for accurate software effort estimation
Authors: Karthikeyan Ponnalagu, N. Narendra
DOI: 10.1145/2384716.2384774

Abstract: It is well known that accurate effort estimation is one of the key factors in the success of a software project. However, as any project manager knows, generating accurate estimates has proven extremely difficult in practice. Even well-known estimation techniques such as COCOMO or SLIM are not guaranteed to work all the time. One key issue in estimation is the selection of an appropriate historical project data set as a frame of reference against which the estimate can be generated. In our experience with software projects at IBM, we have found this to be the most crucial factor in the success of a software estimate; indeed, choosing the wrong project data set during estimation can be disastrous for the software project in question. This is because the trendlines (charts of effort vis-a-vis size) generated from the historical data determine the estimate for the software project, and wrong trendlines could result in wrong estimates. To that end, in this paper we present an automated trendline generation technique for improving effort estimation in software projects. Our technique makes use of a novel data structure that we have designed, called the Estimation Key-Map, which represents project data in a multi-dimensional format. This format enables dynamic analysis and clustering of project data into appropriate subsets that can be selected as historical data for estimating the software project in question. We validate our technique against reported actual data by evaluating it on a large project data set from IBM; we show how our technique enables the selection of the appropriate trendline, thereby enabling more accurate effort estimates.
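The final estimation step the abstract describes, fitting an effort-vs-size trendline to a chosen historical subset and reading off an estimate, can be sketched with a standard power-law model. This is an illustrative sketch, not the Estimation Key-Map itself; the power-law form effort = a * size^b and the sample data points are assumptions, not the paper's data.

```python
# Illustrative sketch: fit an effort-vs-size trendline to a historical
# subset and use it to estimate a new project. Assumes the common
# power-law model effort = a * size^b, fitted as a line in log-log space.

import math

def fit_trendline(sizes, efforts):
    """Least-squares fit of log(effort) = log(a) + b * log(size)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    log_a = my - b * mx
    return math.exp(log_a), b

def estimate(size, a, b):
    return a * size ** b

# Hypothetical historical subset: size in KLOC, effort in person-months.
a, b = fit_trendline([10, 20, 40, 80], [24, 41, 70, 120])
mid = estimate(30, a, b)
# Sanity check: a 30-KLOC estimate should land between the 20- and 40-KLOC efforts.
assert 41 < mid < 70
```

The paper's contribution is upstream of this step: clustering the historical projects so that the subset passed to the fit actually resembles the project being estimated, since the fitted trendline is only as good as the data chosen for it.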