Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506569
HTML Document Error Detector and Visualiser for Novice Programmers
Steven Schmoll, Anith Vishwanath, M. A. Siddiqui, Boppaiah Koothanda Subbaiah, C. Chua
For novice programmers, learning HTML poses challenges similar to those of learning a conventional programming language. Apart from the HTML validator, there are few tools to help novice programmers address errors in their HTML code. In this study, we employ visualisation techniques to display the structural and contextual information of HTML code, condensing and visually representing its important aspects. This enables novice programmers to gain insight into the structure of their HTML code and to locate underlying syntax and semantic errors.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506574
A Large-Scale Empirical Study on Android Runtime-Permission Rationale Messages
Xueqing Liu, Yue Leng, Wei Yang, Wenyu Wang, ChengXiang Zhai, Tao Xie
Since Android 6.0 introduced the runtime-permission system, many apps have provided runtime-permission-group rationales to help users better understand the permissions they request. To understand the patterns of rationales and the extent to which rationales improve users' understanding of the purposes of requested permission groups, we conduct a large-scale measurement study of five aspects of runtime rationales. We have five main findings: (1) fewer than 25% of the apps under study provide rationales; (2) for permission-group purposes that are difficult to understand, the proportion of apps that provide rationales is even lower; (3) the purposes stated in a significant proportion of rationales are incorrect; (4) a large proportion of customized rationales provide no more information than Android's default permission-requesting message; (5) apps that provide rationales are more likely to explain the same permission group's purposes in their descriptions than apps that do not. We further discuss important implications of these findings.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506553
Visual Knowledge Negotiation
A. Blackwell, Luke Church, M. Mahmoudi, Mariana Marasoiu
We ask how users interact with ‘knowledge’ in the context of artificial intelligence systems. Four examples of visual interfaces demonstrate the need for such systems to allow room for negotiation between domain experts, automated statistical models, and the people who are involved in collecting and providing data.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506530
Toward an Efficient User Interface for Block-Based Visual Programming
Y. Inayama, H. Hosobe
Block-based visual programming (BVP) is becoming popular as a basis for programming education. It allows beginners to construct programs visually without suffering from syntax errors. However, a typical BVP user interface is inefficient, partly because users must perform many drag-and-drop operations to add blocks to a program, and partly because they must find the necessary blocks among many choices. To improve the efficiency of constructing programs in a BVP system, we propose a user interface that introduces three new features: (1) semiautomatic addition of blocks; (2) a pie menu for changing block categories; (3) focus+context visualization of the blocks in a category. We implemented a prototype BVP system with the new user interface.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506573
End-User Development in Social Psychology Research: Factors for Adoption
D. Rough, A. Quigley
Psychology researchers employ the Experience Sampling Method (ESM) to capture thoughts and behaviours of participants within their everyday lives. Smartphone-based ESM apps are increasingly used in such research. However, the diversity of researchers' app requirements, coupled with the cost and complexity of their implementation, has prompted end-user development (EUD) approaches. In addition, limited evaluation of such environments beyond lab-based usability studies precludes discovery of factors pertaining to real-world EUD adoption. We first describe the extension of Jeeves, our visual programming environment for ESM app creation, in which we implemented additional functional requirements derived from a survey and analysis of previous work. We further describe interviews with psychology researchers to understand their practical considerations for employing this extended environment in their work practices. Results of our analysis are presented as factors pertaining to the adoption of EUD activities within and between communities of practice.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506491
ZenStates: Easy-to-Understand Yet Expressive Specifications for Creative Interactive Environments
J. Barbosa, M. Wanderley, Stéphane Huot
Much progress has been made on interactive behavior development tools for expert programmers. However, little effort has gone into investigating how these tools support creative communities who typically struggle with technical development, as is the case, for instance, for media artists and composers working with interactive environments. To address this problem, we introduce ZenStates: a new specification model for creative interactive environments that combines Hierarchical Finite-State Machines, expressions, off-the-shelf components called Tasks, and a global communication system called the Blackboard. Our evaluation is threefold: (a) implementing our model in a direct-manipulation-based software interface; (b) probing ZenStates' expressive power through 90 exploratory scenarios; and (c) performing a user study to investigate the understandability of ZenStates' model. The results support ZenStates' viability and expressivity, and suggest that ZenStates is easier to understand, in terms of decision time and decision accuracy, than two popular alternatives.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506582
Automated Test Generation Based on a Visual Language Applicational Model
Mariana Cabeda, Pedro Santos
This showpiece presents a tool that helps OutSystems developers generate test suites for their applications efficiently and effectively. The OutSystems language is a visual language represented graphically as a graph, which the tool traverses in order to generate test cases. The tool automatically generates and presents to the developer the input combinations needed to reach maximum code coverage, offering a coverage evaluation according to a set of coverage criteria: node, branch, condition, modified condition/decision, and multiple condition coverage.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506541
Using Electroencephalography (EEG) to Understand and Compare Students' Mental Effort as they Learn to Program Using Block-Based and Hybrid Programming Environments
Yerika Jimenez
In recent years, the US has begun scaling up efforts to increase access to CS in K-12 classrooms, and many teachers are turning to block-based programming environments to minimize the syntactic and conceptual challenges students encounter in text-based languages. Block-based programming environments, such as Scratch and App Inventor, are currently used by millions of students in and outside of the classroom. We know that when novice programmers learn to program in block-based environments, they need to understand the components of these environments, how to apply programming concepts, and how to create artifacts. However, we still do not know how students learn these components or what learning challenges they face that hinder their future participation in CS. In addition, the mental effort (cognitive workload) students bear while learning programming constructs is still an open question. The goal of my dissertation research is to leverage advances in electroencephalography (EEG) research to explore how students learn CS concepts, write programs, and complete programming tasks in block-based and hybrid programming environments, and to understand the relationship between cognitive load and their learning.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506512
Evaluating the efficiency of using a search-based automated model merge technique
Ankica Barisic, Csaba Debreceni, Dániel Varró, Vasco Amaral, M. Goulão
Model-driven engineering relies on effective collaboration between different teams, which introduces complex model-management challenges. DSE Merge aims to efficiently merge model versions created by various collaborators, using search-based exploration of solution candidates that represent conflict-free merged models, guided by domain-specific knowledge. In this paper, we report how we systematically evaluated the efficiency of the DSE Merge technique from the user's point of view using a reactive experimental software engineering approach. The empirical tests involved the intended end users (i.e. engineers), namely undergraduate students, and were expected to confirm the impact of the design decisions. In particular, we asked users to merge different versions of the same model using DSE Merge and, for comparison, Diff Merge. The experiment showed that participants required lower cognitive effort to use DSE Merge, and they expressed their preference for and satisfaction with it.
Pub Date : 2018-10-01  DOI: 10.1109/VLHCC.2018.8506502
SiMoNa: A Proof-of-concept Domain Specific Modeling Language for IoT Infographics
C. M. D. Morais, J. Kelner, D. Sadok, Theo Lynn
The Internet of Things (IoT) has emerged as one of the prominent concepts in recent academic discourse, reflecting a wider trend in industry to connect physical objects to the Internet and to each other. The IoT is already generating an unprecedented volume of data in greater varieties and at higher velocities, and making sense of such data is an emerging and significant challenge. Infographics are visual representations that provide a visual space for end users to compare and analyze data, information, and knowledge more efficiently than traditional forms. The nature of IoT requires continual modification of how end users see information in order to achieve such efficiency gains. Conceptualizing and implementing infographics in an IoT system can thus require significant planning and development from data scientists, graphic designers, and developers, resulting in costs in both time and effort. To address this problem, this paper presents SiMoNa, a domain-specific modeling language (DSML) for efficiently creating, connecting, interacting with, and building interactive infographic presentations for IoT systems, based on the model-driven development (MDD) paradigm.