Visualizing Path Exploration to Assist Problem Diagnosis for Structural Test Generation
Jiayi Cao, Angello Astorga, Siwakorn Srisakaokul, Zhengkai Wu, Xueqing Liu, Xusheng Xiao, Tao Xie
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506484
Dynamic Symbolic Execution (DSE) is among the most effective techniques for structural test generation, i.e., test generation to achieve high structural coverage. Despite its recent success, DSE still suffers from various problems, such as the boundary problem, when applied to programs in practice. To assist problem diagnosis for structural test generation, in this paper we propose a visualization approach named PexViz. By aggregating information gathered during DSE exploration, our approach reduces the large search space of potential root causes and helps tool users better understand and diagnose the problems they encounter.

APPINITE: A Multi-Modal Interface for Specifying Data Descriptions in Programming by Demonstration Using Natural Language Instructions
Toby Jia-Jun Li, I. Labutov, Xiaohan Nancy Li, Xiaoyi Zhang, Wenze Shi, Wanling Ding, Tom Michael Mitchell, B. Myers
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506506
A key challenge for generalizing programming-by-demonstration (PBD) scripts is the data description problem: when a user demonstrates performing an action, the system needs to determine features for describing this action and the target object in a way that reflects the user's intention for the action. However, prior approaches for creating data descriptions in PBD systems have problems with usability, applicability, feasibility, transparency and/or user control. Our APPINITE system introduces a multimodal interface with which users can specify data descriptions verbally using natural language instructions. APPINITE guides users to describe their intentions for the demonstrated actions through mixed-initiative conversations, and constructs data descriptions for these actions from the natural language instructions. Our evaluation showed that APPINITE is easy to use and effective in creating scripts for tasks that would otherwise be difficult to create with prior PBD systems due to ambiguous data descriptions in demonstrations on GUIs.
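To make the data description problem concrete, here is a minimal, hypothetical Python sketch (the items, fields, and candidate descriptions are invented for illustration and are not APPINITE's internal representation). A single demonstrated tap is consistent with several candidate descriptions that generalize very differently; a verbal instruction such as "order the cheapest iced drink" is what disambiguates among them.

```python
from dataclasses import dataclass

@dataclass
class GUIItem:
    text: str
    index: int
    price: float

# Hypothetical demonstration: the user tapped "Iced Tea" in a coffee-shop app.
menu = [GUIItem("Espresso", 0, 2.50),
        GUIItem("Iced Cappuccino", 1, 4.50),
        GUIItem("Iced Tea", 2, 3.00)]
tapped = menu[2]

# Candidate data descriptions the system could infer from the same single tap.
candidates = {
    "the third item in the list": lambda item: item.index == tapped.index,
    "the item labeled 'Iced Tea'": lambda item: item.text == tapped.text,
    "any item containing 'Iced'": lambda item: "Iced" in item.text,
    "the cheapest 'Iced' item": lambda item: "Iced" in item.text
        and item.price == min(i.price for i in menu if "Iced" in i.text),
}

# Each description matches the demonstrated item, but they generalize
# differently when the menu changes; the user's verbal instruction selects
# the one that actually reflects their intention.
for name, predicate in candidates.items():
    print(f"{name:30s} -> {[i.text for i in menu if predicate(i)]}")
```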
{"title":"APPINITE: A Multi-Modal Interface for Specifying Data Descriptions in Programming by Demonstration Using Natural Language Instructions","authors":"Toby Jia-Jun Li, I. Labutov, Xiaohan Nancy Li, Xiaoyi Zhang, Wenze Shi, Wanling Ding, Tom Michael Mitchell, B. Myers","doi":"10.1109/VLHCC.2018.8506506","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506506","url":null,"abstract":"A key challenge for generalizing programming-by-demonstration (PBD) scripts is the data description problem - when a user demonstrates performing an action, the system needs to determine features for describing this action and the target object in a way that can reflect the user's intention for the action. However, prior approaches for creating data descriptions in PBD systems have problems with usability, applicability, feasibility, transparency and/or user control. Our APPINITE system introduces a multimodal interface with which users can specify data descriptions verbally using natural language instructions. APPINITE guides users to describe their intentions for the demonstrated actions through mixed-initiative conversations. APPINITE constructs data descriptions for these actions from the natural language instructions. Our evaluation showed that APPINITE is easy-to-use and effective in creating scripts for tasks that would otherwise be difficult to create with prior PBD systems, due to ambiguous data descriptions in demonstrations on GUIs.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131217134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Supporting Effective Strategies for Resolving Vulnerabilities Reported by Static Analysis Tools
Justin Smith
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506525
Static analysis tools detect potentially costly security defects early in the software development process. However, these defects can be difficult for developers to accurately and efficiently resolve. The goal of this work is to understand the vulnerability resolution process so that we can build tools that support more effective strategies for resolving vulnerabilities. In this work, I study developers as they resolve security vulnerabilities to identify their information needs and current strategies. Next, I study existing tools to understand how they support developers' strategies. Finally, I plan to demonstrate how strategy-aware tools can help developers resolve security vulnerabilities more accurately and efficiently.
{"title":"Supporting Effective Strategies for Resolving Vulnerabilities Reported by Static Analysis Tools","authors":"Justin Smith","doi":"10.1109/VLHCC.2018.8506525","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506525","url":null,"abstract":"Static analysis tools detect potentially costly security defects early in the software development process. However, these defects can be difficult for developers to accurately and efficiently resolve. The goal of this work is to understand the vulnerability resolution process so that we can build tools that support more effective strategies for resolving vulnerabilities. In this work, I study developers as they resolve security vulnerabilities to identify their information needs and current strategies. Next, I study existing tools to understand how they support developers' strategies. Finally, I plan to demonstrate how strategy-aware tools can help developers resolve security vulnerabilities more accurately and efficiently.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116638262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Modelling Language for Defining Cloud Simulation Scenarios in RECAP Project Context
C. M. D. Morais, P. Endo, Sergej Svorobej, Theo Lynn
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506544
RECAP is a European Union-funded project that seeks to develop a next-generation resource management solution, from both technical and business perspectives, for technological solutions that span the cloud, fog, and edge layers. The RECAP project comprises a set of use cases with highly complex, scenario-specific requirements that must be modelled and simulated in order to find optimal resource management solutions. Because of these use case characteristics, configuring simulation scenarios is a highly time-consuming task that requires staff with specialist expertise.
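To give a sense of what "defining a simulation scenario" involves, here is a deliberately simplified, hypothetical Python sketch of the information such a definition might capture: infrastructure nodes across the cloud, fog, and edge tiers, a workload, and a placement policy to evaluate. It is not RECAP's modelling language; all names and fields are invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    name: str
    tier: str        # "cloud", "fog", or "edge"
    cpu_cores: int
    memory_gb: int

@dataclass
class Workload:
    name: str
    requests_per_second: int
    cpu_per_request_ms: int

@dataclass
class Scenario:
    name: str
    nodes: List[Node]
    workloads: List[Workload]
    placement_policy: str   # policy the simulation should evaluate

# An invented example scenario; a simulator would take a definition like this
# and report utilisation, latency, and cost for the chosen placement policy.
scenario = Scenario(
    name="video-caching-smoke-test",
    nodes=[Node("dc-1", "cloud", 64, 256),
           Node("pop-1", "fog", 16, 64),
           Node("cam-gw-1", "edge", 4, 8)],
    workloads=[Workload("stream-ingest", requests_per_second=200, cpu_per_request_ms=5)],
    placement_policy="prefer-edge-then-fog",
)
print(f"{scenario.name}: {len(scenario.nodes)} nodes, {len(scenario.workloads)} workload(s)")
```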
{"title":"A Modelling Language for Defining Cloud Simulation Scenarios in RECAP Project Context","authors":"C. M. D. Morais, P. Endo, Sergej Svorobej, Theo Lynn","doi":"10.1109/VLHCC.2018.8506544","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506544","url":null,"abstract":"The RECAP is a European Union funded project that seeks to develop a next-generation resource management solution, from both technical and business perspectives, when adopting technological solutions spanning across cloud, fog, and edge layers. The RECAP project is composed of a set of use cases that present highly complex and scenario-specific requirements that should be modelled and simulated in order to find optimal solutions for resource management. Due use cases characteristics, configuring simulation scenarios is a high time consuming task and requires staff with specialist expertise.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131440368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

DeployGround: A Framework for Streamlined Programming from API playgrounds to Application Deployment
Jun Kato, Masataka Goto
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506562
Interactive web pages for learning programming languages and application programming interfaces (APIs), called “playgrounds,” allow programmers to run and edit example code in place. Despite the benefits of this live programming experience, programmers need to leave the playground at some point and restart development from scratch in their own programming environments. This paper proposes “DeployGround,” a framework for creating web-based tutorials that streamlines learning APIs on playgrounds and developing and deploying applications. As a case study, we created a web-based tutorial for browser-based and Node.js-based JavaScript APIs. A preliminary user study found that participants appreciated the streamlined and social workflow of the DeployGround framework.
{"title":"DeployGround: A Framework for Streamlined Programming from API playgrounds to Application Deployment","authors":"Jun Kato, Masataka Goto","doi":"10.1109/VLHCC.2018.8506562","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506562","url":null,"abstract":"Interactive web pages for learning programming languages and application programming interfaces (APIs), called “playgrounds,” allow programmers to run and edit example codes in place. Despite the benefits of this live programming experience, programmers need to leave the playground at some point and restart the development from scratch in their own programming environments. This paper proposes “DeployGround,” a framework for creating web-based tutorials that streamlines learning APIs on playgrounds and developing and deploying applications. As a case study, we created a web-based tutorial for browser-based and Node.js-based JavaScript APIs. A preliminary user study found appreciation of the streamlined and social workflow of the DeplovGround framework.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132457126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Interactions for Untangling Messy History in a Computational Notebook
Mary Beth Kery, B. Myers
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506576
Experimentation through code is central to data scientists' work. Prior work has identified the need for interaction techniques for quickly exploring multiple versions of code and the associated outputs. Yet previous approaches that provide history information have been challenging to scale: real use produces a large number of versions of different code and non-code artifacts, with dependency relationships and a convoluted mix of different analysis intents. Prior work has found that navigating these records to pick out the relevant information for a given task is difficult and time-consuming. We introduce Verdant, a new system with a novel versioning model to support fast retrieval and sensemaking of messy version data. Verdant provides lightweight interactions for comparing, replaying, and tracing relationships among many versions of different code and non-code artifacts in the editor. We implemented Verdant in Jupyter Notebooks and validated the usability of its interactions through a usability study.
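As a rough illustration of the kind of versioning model such a tool needs (a hypothetical sketch, not Verdant's actual data model), the Python snippet below records versions of code and output artifacts with parent links and a produced-by relationship, so that one can trace which code version generated a given output.

```python
from dataclasses import dataclass
from typing import Optional, List, Dict

@dataclass
class ArtifactVersion:
    artifact_id: str           # e.g., "cell-3" or "cell-3-output"
    version: int
    content: str
    parent: Optional["ArtifactVersion"] = None       # previous version of same artifact
    produced_by: Optional["ArtifactVersion"] = None  # code version that made this output

history: Dict[str, List[ArtifactVersion]] = {}

def record(artifact_id: str, content: str,
           produced_by: Optional[ArtifactVersion] = None) -> ArtifactVersion:
    versions = history.setdefault(artifact_id, [])
    parent = versions[-1] if versions else None
    v = ArtifactVersion(artifact_id, len(versions) + 1, content, parent, produced_by)
    versions.append(v)
    return v

# A notebook cell is edited twice; each run produces a new output version.
code_v1 = record("cell-3", "df.plot(kind='bar')")
out_v1 = record("cell-3-output", "<bar chart>", produced_by=code_v1)
code_v2 = record("cell-3", "df.plot(kind='line')")
out_v2 = record("cell-3-output", "<line chart>", produced_by=code_v2)

# Tracing: which code produced the first output, and how has the cell evolved?
print(out_v1.produced_by.content)              # df.plot(kind='bar')
print([v.content for v in history["cell-3"]])  # both code versions
```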
{"title":"Interactions for Untangling Messy History in a Computational Notebook","authors":"Mary Beth Kery, B. Myers","doi":"10.1109/VLHCC.2018.8506576","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506576","url":null,"abstract":"Experimentation through code is central to data scientists' work. Prior work has identified the need for interaction techniques for quickly exploring multiple versions of the code and the associated outputs. Yet previous approaches that provide history information have been challenging to scale: real use produces a high number of versions of different code and non-code artifacts with dependency relationships and a convoluted mix of different analysis intents. Prior work has found that navigating these records to pick out the relevant information for a given task is difficult and time consuming. We introduce Verdant, a new system with a novel versioning model to support fast retrieval and sensemaking of messy version data. Verdant provides light-weight interactions for comparing, replaying, and tracing relationships among many versions of different code and non-code artifacts in the editor. We implemented Verdant into Jupyter Notebooks, and validated the usability of Verdant's interactions through a usability study.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129897047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

What Programming Languages Do Developers Use? A Theory of Static vs Dynamic Language Choice
Aaron Pang, C. Anslow, J. Noble
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506534
We know very little about why developers do what they do. Lab studies are all very well, but often their results (e.g. that static type systems make development faster) seem contradicted by practice (e.g. developers choosing JavaScript or Python rather than Java or C#). In this paper we build a first cut of a theory of why developers do what they do, with a focus on the domain of static versus dynamic programming languages. We used a qualitative research method, Grounded Theory, to interview developers (n = 15) about their experience using static and dynamic languages, and constructed a Grounded Theory of their programming language choices.
{"title":"What Programming Languages Do Developers Use? A Theory of Static vs Dynamic Language Choice","authors":"Aaron Pang, C. Anslow, J. Noble","doi":"10.1109/VLHCC.2018.8506534","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506534","url":null,"abstract":"We know very little about why developers do what they do. Lab studies are all very well, but often their results (e.g. that static type systems make development faster) seem contradicted by practice (e.g. developers choosing JavaScript or Python rather than Java or C#). In this paper we build a first cut of a theory of why developers do what they do with a focus on the domain of static versus dynamic programming languages. We used a qualitative research method - Grounded Theory, to interview a number of developers $pmb{(mathrm{n}=15)}$ about their experience using static and dynamic languages, and constructed a Grounded Theory of their programming language choices.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132880538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

BioWebEngine: A generation environment for bioinformatics research
Paolo Bottoni, T. Castrignanò, Tiziano Flati, Francesco Maggi
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506579
With technologies for massively parallel genome sequencing now available, bioinformatics has entered the “big data” era. Developing applications in this field involves collaboration between domain experts and IT specialists to specify programs that can query multiple sources, obtain data in different formats, search the data for significant patterns, and present the results through various types of visualisation. Based on the experience gained in developing several Web portals for accessing and querying genomics and proteomics databases, we have derived a meta-model of such portals and implemented BioWebEngine, a generation environment in which a user is assisted in specifying and deploying the intended portal according to the meta-model.
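For flavour, here is a hypothetical Python sketch of what a portal specification conforming to such a meta-model might capture: data sources, query forms, and visualisations. The field names and the example portal are invented for illustration; they are not BioWebEngine's actual meta-model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataSource:
    name: str
    endpoint: str          # e.g., a REST endpoint or database connection string
    data_format: str       # e.g., "FASTA", "VCF", "JSON"

@dataclass
class QueryForm:
    name: str
    fields: List[str]      # parameters the portal user can fill in
    target_source: str     # which DataSource the query runs against

@dataclass
class Visualisation:
    name: str
    kind: str              # e.g., "table", "genome browser track", "heatmap"
    input_query: str       # which QueryForm feeds it

@dataclass
class PortalSpec:
    title: str
    sources: List[DataSource] = field(default_factory=list)
    queries: List[QueryForm] = field(default_factory=list)
    views: List[Visualisation] = field(default_factory=list)

# An invented example instance; a generator would turn a spec like this
# into the query pages and result views of a working portal.
spec = PortalSpec(
    title="Example proteomics portal",
    sources=[DataSource("uniprot-mirror", "https://example.org/api", "JSON")],
    queries=[QueryForm("by-gene", ["gene_symbol", "organism"], "uniprot-mirror")],
    views=[Visualisation("results-table", "table", "by-gene")],
)
print(spec.title, "-", len(spec.queries), "query form(s)")
```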
{"title":"BioWebEngine: A generation environment for bioinformatics research","authors":"Paolo Bottoni, T. Castrignanò, Tiziano Flati, Francesco Maggi","doi":"10.1109/VLHCC.2018.8506579","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506579","url":null,"abstract":"With technologies for massively parallel genome sequencing available, bioinformatics has entered the “big data” era. Developing applications in this field involves collaboration of domain experts with IT specialists to specify programs able to query several sources, obtain data in several formats, search them for significant patterns and present the obtained results according to several types of visualisation. Based on the experience gained in developing several Web portals for accessing and querying genomics and proteomics databases, we have derived a meta-model of such portals and implemented BioWebEngine, a generation environment where a user is assisted in specifying and deploying the intended portal according to the meta-model.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114200703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

How End Users Express Conditionals in Programming by Demonstration for Mobile Apps
Marissa Radensky, Toby Jia-Jun Li, B. Myers
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506492
Though conditionals are an integral component of programming, providing an easy means of creating conditionals remains a challenge for programming-by-demonstration (PBD) systems for task automation. We hypothesize that a promising method for implementing conditionals in such systems is to incorporate verbal instructions. Verbal instructions supplied concurrently with demonstrations have been shown to improve the generalizability of PBD. However, the challenge of supporting conditional creation using this multimodal approach has not been addressed. In this extended abstract, we present our study on understanding how end users describe conditionals in natural language for mobile app tasks. We conducted a formative study with 56 participants, asking them to verbally describe conditionals in different settings for 9 sample tasks and to invent conditional tasks. Analyzing participant responses with open coding revealed that, in the context of mobile apps, end users often omit desired else statements when explaining conditionals, sometimes use ambiguous concepts in expressing conditionals, and often want to implement complex conditionals. Based on these findings, we discuss the implications for designing a multimodal PBD interface that supports the creation of conditionals.
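As a small, invented illustration of why an omitted else branch matters for a PBD system (not code or data from the study), consider representing a parsed natural-language conditional. If the user says only "if it's raining, order an Uber," the system still has to decide what happens otherwise, so the sketch below keeps the else branch explicit and empty, flagging it for a follow-up question.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Conditional:
    condition: str               # predicate as the user phrased it
    then_actions: List[str]      # demonstrated or described actions
    else_actions: Optional[List[str]] = None  # None = the user never said

    def needs_clarification(self) -> bool:
        # An omitted else is ambiguous: "do nothing" vs. "I forgot to say".
        return self.else_actions is None

# "If it's raining, order an Uber" -- the else branch was never mentioned.
rule = Conditional(condition="it is raining outside",
                   then_actions=["open ride app", "order an Uber home"])

if rule.needs_clarification():
    print("Ask the user: what should happen when it is NOT raining?")
```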
{"title":"How End Users Express Conditionals in Programming by Demonstration for Mobile Apps","authors":"Marissa Radensky, Toby Jia-Jun Li, B. Myers","doi":"10.1109/VLHCC.2018.8506492","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506492","url":null,"abstract":"Though conditionals are an integral component of programming, providing an easy means of creating conditionals remains a challenge for programming-by-demonstration (PBD) systems for task automation. We hypothesize that a promising method for implementing conditionals in such systems is to incorporate the use of verbal instructions. Verbal instructions supplied concurrently with demonstrations have been shown to improve the generalizability of PBD. However, the challenge of supporting conditional creation using this multi-modal approach has not been addressed. In this extended abstract, we present our study on understanding how end users describe conditionals in natural language for mobile app tasks. We conducted a formative study of 56 participants asking them to verbally describe conditionals in different settings for 9 sample tasks and to invent conditional tasks. Participant responses were analyzed using open coding and revealed that, in the context of mobile apps, end users often omit desired else statements when explaining conditionals, sometimes use ambiguous concepts in expressing conditionals, and often desire to implement complex conditionals. Based on these findings, we discuss the implications for designing a multimodal PBD interface to support the creation of conditionals.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114218697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

API Designers in the Field: Design Practices and Challenges for Creating Usable APIs
Lauren Murphy, Mary Beth Kery, Oluwatosin Alliyu, A. Macvean, B. Myers
Pub Date: 2018-10-01 · DOI: 10.1109/VLHCC.2018.8506523
Application Programming Interfaces (APIs) are a rapidly growing industry, and their usability is crucial to programmer productivity. Although prior research has shown that APIs commonly suffer from significant usability problems, little attention has been given to studying how APIs are designed and created in the first place. We interviewed 24 professionals involved with API design from 7 major companies to identify their training and design processes. Interviewees had insights into many different aspects of designing for API usability, as well as areas of significant struggle. For example, they learned to do API design on the job and had little training for it in school. During the design phase, they found it challenging to discern which of an API's potential use cases users will value most. After an API is released, developers openly discuss it online, yet designers lack tools to gather aggregate feedback from these discussions.
{"title":"API Designers in the Field: Design Practices and Challenges for Creating Usable APIs","authors":"Lauren Murphy, Mary Beth Kery, Oluwatosin Alliyu, A. Macvean, B. Myers","doi":"10.1109/VLHCC.2018.8506523","DOIUrl":"https://doi.org/10.1109/VLHCC.2018.8506523","url":null,"abstract":"Application Programming Interfaces (APIs) are a rapidly growing industry and the usability of the APIs is crucial to programmer productivity. Although prior research has shown that APIs commonly suffer from significant usability problems, little attention has been given to studying how APIs are designed and created in the first place. We interviewed 24 professionals involved with API design from 7 major companies to identify their training and design processes. Interviewees had insights into many different aspects of designing for API usability and areas of significant struggle. For example, they learned to do API design on the job, and had little training for it in school. During the design phase they found it challenging to discern which potential use cases of the API users will value most. After an API is released, designers lack tools to gather aggregate feedback from this data even as developers openly discuss the API online.","PeriodicalId":444336,"journal":{"name":"2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124762293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}