G. Costagliola, Mattia De Rosa, V. Fuccella, Mark Minas
In this paper, we present ParVis, an interactive visual system for the animated visualization of logged parser execution traces. The system allows a parser implementer to create a visualizer for generated parsers by simply defining a JavaScript module that maps each logged parser instruction to a set of events driving the visual system's interface. The result is a set of interacting graphical/text windows that lets users explore logged parser executions and helps them fully understand how the parser behaves on a given input. We have used our system to visualize the behavior of both textual and visual parsers, and we describe here its use with the well-known CUP parser generator. Preliminary tests with users have provided good feedback.
"ParVis." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399853.
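As a concrete illustration of the kind of mapping module the ParVis abstract describes, the sketch below turns logged parser instructions into lists of interface events. It is written in Python rather than JavaScript, and the instruction names, event fields, and grammar rule are invented for illustration; none of them come from ParVis itself.

```python
# Hypothetical sketch: map each logged parser instruction to a list of
# events for a visualization front end (instruction and event names invented).

def map_instruction(instr):
    """Map one logged shift-reduce parser instruction to interface events."""
    op, *args = instr.split()
    if op == "SHIFT":
        return [{"event": "push_stack", "symbol": args[0]},
                {"event": "advance_input"}]
    if op == "REDUCE":
        return [{"event": "pop_stack", "count": int(args[0])},
                {"event": "highlight_rule", "rule": args[1]}]
    return [{"event": "log", "text": instr}]  # fallback for unknown entries

# A tiny invented trace of a shift-reduce parse.
trace = ["SHIFT id", "SHIFT +", "SHIFT id", "REDUCE 3 E->E+E"]
events = [e for line in trace for e in map_instruction(line)]
```

A real ParVis module would emit whatever events the visual windows consume; the point is only that the mapping is a pure, per-instruction function over the log.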
In this paper, we present and assess a novel unitizing technique inspired by a cognitive theory of event structure perception. Unitizing is the process of dividing an observation into smaller units. It is often performed automatically, e.g., by selecting fixed-length windows. Although fast, such an approach may place unit boundaries mid-interaction, ultimately affecting observation, annotation, and labeling. We conceived a unitizing technique based on Event Segmentation Theory. In brief, changes drive the perception of boundaries between events (or units): an unexpected change in the observed situation may mean the current event has ended and a new one has begun. Our technique relies on observed changes to identify unit boundaries. A first sketch of our technique was recently tested and proved effective in overcoming the aforementioned shortcomings of fixed-window unitizing. Here, we test it in a different domain, solo stage performances, to explore the feasibility of adopting our unitizing approach across domains. Our results further support the idea of leveraging Event Segmentation Theory in the design of an automatic video unitizing technique.
Eleonora Ceccaldi and G. Volpe. "Towards a cognitive-inspired automatic unitizing technique: a feasibility study." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399825.
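The change-driven boundary idea behind this unitizing technique can be sketched minimally as follows; the threshold value, the motion signal, and the function name are illustrative assumptions, not the authors' actual algorithm.

```python
def unit_boundaries(signal, threshold):
    """Place a unit boundary wherever the frame-to-frame change exceeds
    the threshold: an unexpected change ends the current unit."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

# Invented per-frame motion values: two large jumps mark unit boundaries.
motion = [0.1, 0.12, 0.11, 0.9, 0.88, 0.2, 0.19]
print(unit_boundaries(motion, 0.5))  # boundaries at frames 3 and 5
```

Unlike fixed-length windows, boundaries found this way can never fall in the middle of a stable stretch of activity, which is exactly the shortcoming the abstract describes.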
M. T. Baldassarre, Vita Santa Barletta, D. Caivano, A. Piccinno
Nowadays, the size and complexity of software development projects increase the risk of cyber-attacks, information exfiltration, and data breaches. In this context, developers play a primary role in addressing privacy and, consequently, security requirements in software applications. Currently, only general guidelines exist, and they are difficult to put into operation due to the lack of the required security skills and knowledge, and to the use of legacy software development processes that do not address privacy and security aspects. This paper presents a knowledge base, the Privacy Knowledge Base (PKB), and the VIS-PRISE prototype (Visually Inspection to Support Privacy and Security), a visual tool that supports developers' decisions to integrate privacy and security requirements into all software development phases. An initial experimental study with junior developers is also presented.
"A Visual Tool for Supporting Decision-Making in Privacy Oriented Software Development." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399818.
In this paper, we introduce a tool aimed at supporting deep qualitative analysis of digital comics. The tool exploits language-based technologies to facilitate the exploration of relatively large sets of comics. The core idea is that the specific words used in the comics are both an important element of the analysis and an index to navigate and explore the dataset. The design concept has been validated in a pilot study and the findings provide evidence that the approach meets the needs of qualitative analysts with the potential of improving their practices.
A. Gloder, L. Ducceschi, and M. Zancanaro. "A Language-based Interface for Analysis of Digital Storytelling." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399859.
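The words-as-index idea from the abstract above can be sketched as a simple inverted index mapping each word to the comics it appears in. The data and identifiers are invented for illustration and are not taken from the tool.

```python
from collections import defaultdict

def build_word_index(comics):
    """Map each word to the set of comics it appears in, so the words
    themselves become an index for navigating the collection."""
    index = defaultdict(set)
    for comic_id, text in comics.items():
        for word in text.lower().split():
            index[word].add(comic_id)
    return index

# Invented two-comic dataset.
comics = {"c1": "the hero returns", "c2": "the villain returns again"}
idx = build_word_index(comics)
# idx["returns"] covers both comics; idx["hero"] only the first.
```

An analyst clicking a word in such a tool would, in effect, be following one of these index entries back into the dataset.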
The process of verifying linear model assumptions and remedying associated violations is complex, even when dealing with simple linear regression. This process is not well supported by current tools and remains time-consuming, tedious, and error-prone. We present RegLine, a visual analytics tool supporting the iterative process of assumption verification and violation remedy for simple linear regression models. To identify the best possible model, RegLine helps novices perform data transformations, deal with extreme data points, analyze residuals, validate models by their assumptions, and compare and relate models visually. A qualitative user study indicates that these features of RegLine support the exploratory and refinement process of model building, even for those with little statistical expertise. These findings may guide visualization designs on how interactive visualizations can facilitate refining and validating more complex models.
Xiaoyi Wang, L. Micallef, and K. Hornbæk. "RegLine." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399913.
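The basic machinery RegLine visualizes, an ordinary least squares fit and its residuals, can be sketched as follows. This is a minimal illustration with invented data, not RegLine's implementation.

```python
def fit_line(xs, ys):
    """Ordinary least squares for simple linear regression y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def residuals(xs, ys, a, b):
    """Observed minus fitted values; plotting these against x is the
    classic visual check of the linearity and equal-variance assumptions."""
    return [y - (a + b * x) for x, y in zip(xs, ys)]

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # invented, roughly linear data
a, b = fit_line(xs, ys)
res = residuals(xs, ys, a, b)
```

With OLS the residuals always sum to (numerically) zero; it is their *pattern* against x that reveals assumption violations, which is the kind of diagnostic RegLine turns into an interactive visual step.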
Automation, as a design goal, focuses mainly on the migration of tasks from a human operator to a mechanical or digital system. Designing automation thus usually consists of removing tasks or activities from that operator and designing systems able to perform them. When these automations are not adequately designed (or not correctly understood by the operator), they may result in so-called automation surprises [1], [2] that degrade, instead of enhance, the overall performance of the operator-system pair. Usually, these tasks are considered at a high level of abstraction (related to work and work objectives), leaving low-level, repetitive tasks unconsidered. This paper proposes a decomposition of automation for interactive systems, highlighting the diverse objectives it may target. In addition, multiple complementary views of automation for interactive systems design are presented to better define the multiform concept of automation. The paper provides numerous concrete examples illustrating each view and identifies ten rules for designing interactive systems embedding automations.
Philippe A. Palanque. "Ten Objectives and Ten Rules for Designing Automations in Interaction Techniques, User Interfaces and Interactive Systems." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3400872.
Silvio Barra, A. Carcangiu, S. Carta, Alessandro Sebastian Podda, Daniele Riboni
Manual event tagging can be a very long and tiring activity, due to the monotonous operations involved. This is particularly true of online video tagging, as in football matches, where the events to tag can number many thousands of actions, depending on the desired level of granularity. In this work, we describe a solution, developed for an existing football match tagging application, in which the GUI has been enhanced and integrated with a Voice User Interface, aiming to reduce tagging time and error rate. Empirical tests have shown the efficiency and benefits of the developed solution.
"A Voice User Interface for football event tagging applications." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399967.
Silvestro V. Veneruso, Lauren S. Ferro, Andrea Marrella, Massimo Mecella, Tiziana Catarci
The use of videogames has become an established way to educate users about various topics. Videogames can promote challenge, cooperation, engagement, motivation, and the development of problem-solving strategies, all aspects with significant educational potential. In this paper, we present the design and realization of CyberVR, a Virtual Reality (VR) videogame that acts as an interactive learning experience to improve users' awareness of cybersecurity-related issues. We report the results of a user study showing that, for cybersecurity education, CyberVR is equally effective as, but more engaging than, traditional textbook learning.
"CyberVR." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399860.
This paper examines current perspectives in Data Visualization research, which were essentially conceived to provide guidelines for finding the best mapping between data and visual representations. Going back to foundational HCI concepts that rely on the manipulation of visual symbols, we propose a new perspective that focuses on a different configuration, one that considers visual signs, professional contexts, and user practices. We argue that, so far, user practices have been neglected or left behind in design, evaluation, and recommendation scenarios, reduced to the purely relational focus among kinds of data, kinds of charts, and in-lab tasks. This may underestimate the potential of the pragmatic side of this relation, where humans manipulate and interpret signs on the basis of their "practical knowledge", a factor that should be considered to improve human interactions with Data Visualization tools. The perspective discussed here brings to light and helps frame open problems such as interactions in routine tasks and the interpretation of data through visual interactive tools in daily professional practice. By proposing a light but formal model for investigating these pragmatic interactions, we aim to contribute to the current debate around data visualization as the new strategic tool for dealing with the growing complexity of big data streams, the digitization of life, and sensor- and hardware-embedded intelligence.
P. Buono and A. Locoro. "Modelling Data Visualization Interactions: from Semiotics to Pragmatics and Back to Humans." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399903.
Rudy Berton, A. Kolasinska, O. Gaggi, C. Palazzi, Giacomo Quadrio
Even though the World Wide Web is one of the main providers of content and services, unfortunately these contents and services are not truly accessible to everyone. People affected by impairments often have difficulty navigating Web pages, for a wide range of reasons. In this paper, we focus on people affected by dyslexia. These users experience difficulties in reading acquisition, despite normal intelligence and adequate access to conventional instruction. For this reason, we have created Help me read!, a Chrome extension that allows users to change many features of a Web page. Furthermore, it allows them to isolate and enlarge one word at a time. This feature is crucial, as it lets people with dyslexia focus on each single word, thus overcoming one of their main difficulties.
"A Chrome extension to help people with dyslexia." Proceedings of the International Conference on Advanced Visual Interfaces, 2020-09-28. DOI: 10.1145/3399715.3399843.