Designing and developing innovative visualizations to assist humans in generating and understanding complex semantic data has become an important element of effective human-ontology interaction, as visual cues can provide clarity, promote insight, and amplify cognition. While recent research has indicated the potential benefits of adaptive technologies, typical ontology visualization techniques have traditionally followed a one-size-fits-all approach that ignores an individual user's preferences, abilities, and visual needs. Toward realizing adaptive ontology visualization, this paper presents a solution that predicts a user's likely success or failure in real time, prior to task completion, by applying established machine learning models to eye gaze generated during an interactive session. These predictions are envisioned to inform future adaptive ontology visualizations that could adjust their visual cues or recommend alternative visualizations in real time to improve individual user success. The paper presents findings from a series of experiments demonstrating that real-time, gaze-based success and failure predictions can be achieved with a number of off-the-shelf classifiers, without expert configuration, in the presence of mixed user backgrounds and task domains, across two commonly used fundamental ontology visualization techniques.
{"title":"Impending Success or Failure? An Investigation of Gaze-Based User Predictions During Interaction with Ontology Visualizations","authors":"Bo Fu, B. Steichen","doi":"10.1145/3531073.3531081","DOIUrl":"https://doi.org/10.1145/3531073.3531081","url":null,"abstract":"Designing and developing innovative visualizations to assist humans in the process of generating and understanding complex semantic data has become an important element in supporting effective human-ontology interaction, as visual cues are likely to provide clarity, promote insight, and amplify cognition. While recent research has indicated potential benefits of applying novel adaptive technologies, typical ontology visualization techniques have traditionally followed a one-size-fits-all approach that often ignores an individual user's preferences, abilities, and visual needs. In an effort to realize adaptive ontology visualization, this paper presents a potential solution to predict a user's likely success and failure in real time, and prior to task completion, by applying established machine learning models on eye gaze generated during an interactive session. These predictions are envisioned to inform future adaptive ontology visualizations that could potentially adjust its visual cues or recommend alternative visualizations in real time to improve individual user success. This paper presents findings from a series of experiments to demonstrate the feasibility of gaze-based success and failure predictions in real time that can be achieved with a number of off-the-shelf classifiers without the need of expert configurations in the presence of mixed user backgrounds and task domains across two commonly used fundamental ontology visualization techniques.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122124366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parvaneh Parvin, Marco Manca, C. Senette, Maria Claudia Buzzi, M. Buzzi, S. Pelagatti
Oral health care can be a challenging experience for children with autism, for their parents, and for dentists. Recently, technology-enhanced systems have been proposed to help people with autism cope with distressing situations arising from unfamiliar social contexts, such as dental care settings with intense audiovisual stimulation. Results have been positive in mitigating anxiety at the dental clinic but seem to fall short in supporting proper oral hygiene at home. Thanks to the increasing spread of household Intelligent Personal Assistants (IPAs) and Vocal Conversational Agents (VCAs), we envisage new opportunities for voice-enabled IPAs not only as support for daily activities but also as a way to enrich and simplify access to healthcare procedures from home. This work extends the use of technology-enhanced systems for dental care by exploiting the potential of a vocal user interface, Amazon Alexa, as an instructional agent for children on the spectrum. To this end, we developed a personalized Alexa Skill with two functionalities: (i) supporting the child during the routine transition toward the target activity, i.e., moving to the bathroom to brush their teeth; and (ii) acting as a persuader and a timer that guides the child through the procedure while observing the proper brushing time. We conducted a three-week preliminary study with three children with different autistic profiles. The goal was to identify opportunities and issues arising from introducing the device into the home context and to test its use in support of dental care. Results and feedback were encouraging and provided insights for improving this approach.
{"title":"Alexism: ALEXa supporting children with autISM in their oral care at home","authors":"Parvaneh Parvin, Marco Manca, C. Senette, Maria Claudia Buzzi, M. Buzzi, S. Pelagatti","doi":"10.1145/3531073.3531157","DOIUrl":"https://doi.org/10.1145/3531073.3531157","url":null,"abstract":"Oral health care can be a challenging experience for children with autism, for their parents and for dentists. Recently, some technology-enhanced systems have been proposed to help people with autism to cope with distressing situations originated by unknown social-life contexts, such as dental care settings with intense sound-visual stimulations. Results were positive in mitigating anxiety at the dental clinic but seem to fail in supporting proper oral hygiene at home. Thanks to the increasing spread of household Intelligent Personal Assistants (IPA) and Vocal Conversational Agents (VCAs), we envisage new opportunities considering the Voice-enabled IPAs not only as support on daily activities but also to enrich and simplify access to healthcare procedures from home. This work attempts to extend the use of technology-enhanced systems for dental care by exploiting the potential of the Vocal User Interface, Amazon Alexa, as an instructional agent with children on the spectrum. To this purpose, we developed a personalized Alexa Skill with two different functionalities: (i) support the child during the routine transition toward the target activity: move to the bathroom to brush their teeth; (ii) act as a persuader and a timer to guide the child during the procedure observing the proper brushing time. We conducted a three-week preliminary study with three children of different autistic profiles. The goal was to collect opportunities and issues deriving from the device introduction in the home context and test the device usage to favour dental care. Results and feedback were encouraging and gave insights to improve this approach.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122270129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhuoming Zhang, Jessalyn Alvina, F. Détienne, É. Lecolinet
Smartphone screens are touch-sensitive and offer rich visual output capabilities, but they neither let users control a wide range of pressure input nor provide rich pressure sensations. We propose In-Flat, an input/output pressure-sensitive overlay for smartphones. In-Flat consists of a transparent, inflatable, skin-like silicone layer that can be placed on the top or the back of a smartphone. As an output device, In-Flat offers tangible affordances and dynamic pressure feedback coupled with the visual display. As an input device, In-Flat enables users to continuously perform a wide range of input gestures, notably press and pinch-and-pull gestures. In-Flat can thus be used to finely manipulate visual objects in mobile interaction or to mediate interpersonal touch communication. In contrast to previous studies, which mostly focus on press, we investigated the performance of pinch-and-pull and compared it with press. Our experiment (N=12) showed that participants could perform pinch-and-pull (83.8%) as successfully as press (84.7%), but felt they had more control when performing pinch-and-pull. We explored the use of In-Flat to enable multimodal interaction that couples visual display with touch input/output. Participants appreciated this coupling as well as the touch sensation supplied by the In-Flat device.
{"title":"Pulling, Pressing, and Sensing with In-Flat: Transparent Touch Overlay for Smartphones","authors":"Zhuoming Zhang, Jessalyn Alvina, F. Détienne, É. Lecolinet","doi":"10.1145/3531073.3531111","DOIUrl":"https://doi.org/10.1145/3531073.3531111","url":null,"abstract":"Smartphones’ screens are touch-sensitive and offer rich visual output capabilities, but do not allow users to control a wide range of pressure input nor provide rich pressure sensations. We propose In-Flat, an input/output pressure-sensitive overlay for smartphones. In-Flat consists of a transparent inflatable skin-like silicon layer that can be placed on the top or the back of a smartphone. As an output device, In-Flat offers tangible affordances and dynamic pressure feedback coupled with visual display. As an input device, In-Flat enables users to continuously perform a wide range of input gestures, notably press and pinch-and-pull gestures. Thus, In-Flat can be used to finely manipulate visual objects in mobile interaction or mediate interpersonal touch communications. In contrast to previous studies that mostly focus on press, we investigated the performance of pinch-and-pull and compared it with press. Our experiment (N=12) showed that participants could perform pinch-and-pull (83.8%) as well as press (84.7%), but felt having more control when performing pinch-and-pull. We explored the use of In-Flat to enable multimodal interaction that couples visual display and touch input/output. Participants appreciated this coupling as well as the touch sensation supplied by the In-Flat device.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126962283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I will present our work on improving passenger journeys using immersive Virtual and Augmented Reality (together, XR) to support entertainment, work, and collaboration on the move. In Europe, people travel an average of 12,000 km per year on private and public transport, in cars, buses, planes, and trains. These journeys are often repetitive, and the time they take is largely wasted. This total will rise with the arrival of fully autonomous cars, which free drivers to become passengers. XR headsets could allow passengers to use their travel time in new, productive ways, but the potential to recover this lost time is impeded by three significant challenges that must first be overcome. Passengers would be able to use large virtual displays for productivity; escape the physical confines of the vehicle and become immersed in virtual experiences; and communicate with distant others through new embodied forms of communication. I will discuss our solutions to these challenges, focusing on their visual aspects. We are: developing new interaction techniques for VR and AR that work in confined, seated spaces; supporting safe, socially acceptable use of XR by providing awareness of others and of the travel environment; and overcoming motion sickness using multimodal countermeasures to support these novel immersive experiences.
{"title":"eXtended Reality and Passengers of the Future","authors":"S. Brewster","doi":"10.1145/3531073.3538399","DOIUrl":"https://doi.org/10.1145/3531073.3538399","url":null,"abstract":"I will present our work into improving passenger journeys using immersive Virtual and Augmented Reality (together XR) to support entertainment, work and collaboration on the move. In Europe, people travel an average of 12,000km per year on private and public transport, in cars, buses, planes and trains. These journeys are often repetitive and wasted time. This total will rise with the arrival of fully autonomous cars, which free drivers to become passengers. The potential to recover this lost time is impeded by 3 significant challenges: XR headsets could allow passengers to use their travel time in new, productive ways, but only if these fundamental challenges can be overcome. Passengers would be able to use large virtual displays for productivity; escape the physical confines of the vehicle and become immersed in virtual experiences; and communicate with distant others through new embodied forms of communication. I will discuss our solutions to these challenges, focusing on the visual aspects. We are: developing new interaction techniques for VR and AR that can work in confined, seated spaces; supporting safe, socially acceptable use of XR providing awareness of others and the travel environment; and overcoming motion sickness using multimodal countermeasures to support these novel immersive experiences.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115588980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lisa-Maria Freller, Mandy Keck, Thomas Neumayr, Mirjam Augstein
When designing the interface of a recommender system, interactive visualizations can be used to address challenges such as transparency and controllability, and thus to increase the user's trust. In this paper, we propose a construction kit for visual recommender systems that can be used to construct new solutions or to deconstruct existing approaches in order to identify and analyze their key aspects.
{"title":"Towards a Construction Kit for Visual Recommender Systems","authors":"Lisa-Maria Freller, Mandy Keck, Thomas Neumayr, Mirjam Augstein","doi":"10.1145/3531073.3534484","DOIUrl":"https://doi.org/10.1145/3531073.3534484","url":null,"abstract":"When designing the interface of a recommender system, interactive visualizations can be used to support challenges such as transparency and controllability, and thus increasing the user’s trust. In this paper, we propose a construction kit for visual recommender systems that can be used to construct new solutions or deconstruct existing approaches to identify and analyze key aspects.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134301384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrea Esposito, Giuseppe Desolda, R. Lanzilotti, M. Costabile
This demo presents SERENE, a Web platform for the semi-automatic UX evaluation of websites. It exploits Artificial Intelligence to predict visitors' emotions from their interaction logs. The predicted emotions are shown as interactive heatmaps overlaid on the webpage under analysis. A concentration of negative emotions in a specific area of the webpage can help UX experts identify UX problems.
{"title":"SERENE: a Web platform for the UX semi-automatic evaluation of website","authors":"Andrea Esposito, Giuseppe Desolda, R. Lanzilotti, M. Costabile","doi":"10.1145/3531073.3534464","DOIUrl":"https://doi.org/10.1145/3531073.3534464","url":null,"abstract":"This demo presents SERENE, a Web platform for the UX semi-automatic evaluation of websites. It exploits Artificial Intelligence to predict visitors’ emotions starting from their interaction logs. The predicted emotions are shown by interactive heatmaps overlapped to the webpage to be analyzed. The concentration of negative emotions in a specific area of the webpage can help the UX experts identify UX problems.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"50 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133648241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eva Geurts, Gustavo Rovelo Ruiz, K. Luyten, Steven Houben, B. Weyers, An Jacobs, Philippe A. Palanque
Operators’ well-being is a key factor in the success of industrial production processes. Even though research has studied well-being aspects of industry, such as supporting and improving ergonomics, there is still a long way to go to achieve a sustainable and healthy work context for the manufacturing industry. We believe the Human-Computer Interaction community can contribute by conducting research on worker well-being in real-life settings. This workshop intends to offer a venue for HCI researchers who focus on worker well-being in the manufacturing industry and other industrial domains.
{"title":"HCI and worker well-being in manufacturing industry","authors":"Eva Geurts, Gustavo Rovelo Ruiz, K. Luyten, Steven Houben, B. Weyers, An Jacobs, Philippe A. Palanque","doi":"10.1145/3531073.3535257","DOIUrl":"https://doi.org/10.1145/3531073.3535257","url":null,"abstract":"Operators’ well-being is a key factor for the success of industrial production processes. Even though research has studied the well-being aspects of the industry, such as support and improvement of ergonomics, there is still a long way to go to achieve a sustainable and healthy work context for manufacturing industry. We believe the Human-Computer Interaction community can contribute by developing research on worker well-being in real-life settings. This workshop intends to offer a venue for HCI researchers that focus on worker well-being for the manufacturing industry and other industry domains.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134317707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present REC, a Unity open-source tool for developers and researchers to record, export, and replay the movements of virtual reality users and virtual characters. Recording both real-time body tracking and animations, the tool can export skeletal data to a comma-separated values (CSV) file, allowing users' movements to be processed and analyzed. We also provide a feature to reload and replay saved movements directly from the CSV file onto virtual characters in a 3D environment.
{"title":"REC: A Unity Tool to Replay, Export and Capture Tracked Movements for 3D and Virtual Reality Applications","authors":"G. Gorisse, O. Christmann, C. Dubosc","doi":"10.1145/3531073.3534472","DOIUrl":"https://doi.org/10.1145/3531073.3534472","url":null,"abstract":"We present REC12, a Unity open-source tool for developers and researchers to record, export and replay the movements of virtual reality users and virtual characters. Recording both real-time body tracking and animations, this tool makes it possible to export skeletal data in a comma-separated values (CSV) file allowing to process and analyze users’ movements. We also provide a feature to reload and replay saved movements directly from the CSV file on virtual characters in a 3D environment.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"156-157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133061044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Florian Mathis, Joseph O'Hagan, Kami Vaniea, M. Khamis
Evaluating interactive systems often requires researchers to invite user study participants to the lab. However, such evaluations often lack realism, and participants are usually recruited from the local area only. In this work, we propose Remote Virtual Reality for simulating Real-world Research (RVR3) to evaluate novel real-world authentication prototypes. A user study (N=25) demonstrates the feasibility of using VR for remote usability research on simulated real-world prototypes. Our remote VR user study provides a glimpse into the usability and social acceptability of two novel authentication systems: Hand Menu and Tap. We build on prior research in this space and discuss the impact RVR3 studies have on the range of possible studies. In summary, our remote VR research method for designing, implementing, and evaluating interactive real-world prototypes is a next step towards moving human-centred research out of the lab and potentially reaching a larger and more diverse participant sample over time.
{"title":"Stay Home! Conducting Remote Usability Evaluations of Novel Real-World Authentication Systems Using Virtual Reality","authors":"Florian Mathis, Joseph O'Hagan, Kami Vaniea, M. Khamis","doi":"10.1145/3531073.3531087","DOIUrl":"https://doi.org/10.1145/3531073.3531087","url":null,"abstract":"Evaluating interactive systems often requires researchers to invite user study participants to the lab. However, corresponding evaluations often lack realism and participants are usually recruited from a local area only. In this work, we propose Remote Virtual Reality for simulating Real-world Research (RVR3) to evaluate novel real-world authentication prototypes. A user study (N=25) demonstrates the feasibility of using VR for remote usability research on simulated real-world prototypes. Our remote VR user study provides a glimpse into the usability and social acceptability of two novel authentication systems: Hand Menu and Tap. We build on prior research in this space and discuss the impact RVR3 studies have on the range of possible studies. In summary, our remote VR research method to design, implement, and evaluate interactive real-world prototypes is a next step towards moving human-centred research out of the lab and potentially reaching a more diverse and larger participant sample over time.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116008869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Angeliki Antoniou, B. D. Carolis, T. Kuflik, A. Origlia, G. Raptis, Cristina Gena
AVI-CH is the 14th workshop in the PATCH workshop series, which has run since 2007, and the fourth in a row held at AVI. It is a meeting place for researchers and practitioners focusing on the application of advanced information and communication technology (ICT) to cultural heritage, with a specific focus on user interfaces, visualization, and interaction. This year, eight papers were submitted by researchers from Greece, Italy, and Israel. All were accepted.
{"title":"AVI-CH 2022: Workshop on Advanced Visual Interfaces and Interactions in Cultural Heritage","authors":"Angeliki Antoniou, B. D. Carolis, T. Kuflik, A. Origlia, G. Raptis, Cristina Gena","doi":"10.1145/3531073.3535259","DOIUrl":"https://doi.org/10.1145/3531073.3535259","url":null,"abstract":"AVI-CH is the 14th workshop in the series of PATCH workshops, since 2007 and the 4th in a row at AVI. It is the meeting place for researchers and practitioners focusing on the application of advanced information and communication technology (ICT) in cultural heritage with a specific focus on user interfaces, visualization and interaction. This year, eight papers were submitted by researchers from Greece, Italy and Israel. All were accepted.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122184342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}