Pub Date: 2024-08-13. DOI: 10.1016/j.ijhcs.2024.103347
Rosella Gennari, Maristella Matera, Alessandra Melonio, Marco Mores, Diego Morra, Mehdi Rizvi
Micro-electronics tools, coupled with card-based tools, are employed for prototyping smart devices with non-experts. Lately, researchers have started investigating which tools can actively engage people with intellectual disabilities (ID) in such prototyping. This paper positions itself in this line of work. It presents a toolkit that enables people with ID to rapidly prototype, together, their own ideas of smart things for their shared environment. It analyses and discusses engaging and disengaging features of the toolkit in light of the results of two workshops with eight participants with ID. Lessons of broad interest for the design of similar toolkits are drawn from the literature and the study findings.
Title: A rapid-prototyping toolkit for people with intellectual disabilities. International Journal of Human-Computer Studies, Vol. 192, Article 103347.
Pub Date: 2024-08-10. DOI: 10.1016/j.ijhcs.2024.103345
Weiwei Zhang, Jianing Yin, Ka I Chan, Tongxin Sun, Tongtong Jin, Jihong Jeung, Jiangtao Gong
Fall-detection cameras at home can detect emergencies involving older adults and send timely, life-saving alerts. However, the balance between privacy protection and life safety remains a controversial issue when using cameras. In this study, we assessed older adults' attitudes towards the privacy issues of such cameras using surveys (N=389) and interviews (N=20). Furthermore, we conducted a co-design workshop (N=6) in which older adults and designers collaborated to develop a camera prototype. We found that for older adults, the disclosure of privacy involves not only a leakage of personal information but also their dignity and control, which has rarely been expressed directly in the past. Our results expand the conceptualisation of privacy and provide novel design implications for privacy in smart product development for older adults.
Title: Beyond digital privacy: Uncovering deeper attitudes toward privacy in cameras among older adults. International Journal of Human-Computer Studies, Vol. 192, Article 103345.
Pub Date: 2024-08-07. DOI: 10.1016/j.ijhcs.2024.103342
Jean-Philippe Rivière, Louis Vinet, Yannick Prié
Virtual Reality (VR) enables the low-cost production of realistic prototypes of buildings at early stages of architectural projects. Such prototypes may be used to gather the experiences of future users and to iterate early in the design. However, it is essential to evaluate whether what is experienced within such VR prototypes corresponds to what will be experienced in reality. Here, we use an innovative method to compare the experiences of patients in a real building and in a virtual environment playing the role of a prototype that architects could have created during the design phase. We first designed and implemented a VR environment replicating an existing ambulatory pathway. Then, we used micro-phenomenological interviews to collect the experiences of real patients in the VR environment (n=8), along with VR traces and first-person point-of-view videos, and in the real ambulatory pathway (n=8). We modeled and normalized the experiences and compared them systematically. Results suggest that patients live comparable experiences along various experiential dimensions such as thought, emotion, sensation, and social and sensory perceptions, and that VR prototypes may be adequate for assessing issues with architectural design. This work opens unique perspectives towards involving patients in User-Centered Design in architecture, though challenges lie ahead in designing VR prototypes from architects' early blueprints.
Title: Towards the use of virtual reality prototypes in architecture to collect user experiences: An assessment of the comparability of patient experiences in a virtual and a real ambulatory pathway. International Journal of Human-Computer Studies, Vol. 192, Article 103342.
Pub Date: 2024-07-30. DOI: 10.1016/j.ijhcs.2024.103341
Marta Ferreira, Nuno Nunes, Pedro Ferreira, Henrique Pereira, Valentina Nisi
This paper investigates the relationship between design research and human-computer interaction (HCI) in the context of climate change communication and engagement. We discuss current practices in climate change communication and the decline in concern and engagement caused by “crisis fatigue”. Through Research through Design (RtD), we set out to investigate data humanism and how users react to climate change data, testing approaches to improve engagement. With this purpose, we designed and evaluated Finding Arcadia, an interactive data story that uses data humanism to shift the dialogue from crisis-focused to action-focused. One study with the original IMF visualisations (N = 17) and two studies in public spaces (N = 12 and N = 64) indicate that contextualising the data and presenting actionable solutions help engage users with climate change issues, support solution-focused narratives, and aid in interpreting and relating to climate data. From these results, we derive insights for designing empowering interactive data visualizations for resilient climate change engagement.
Title: Connecting audiences with climate change: Towards humanised and action-focused data interactions. International Journal of Human-Computer Studies, Vol. 192, Article 103341.
Pub Date: 2024-07-26. DOI: 10.1016/j.ijhcs.2024.103344
Siyuan Zhou, Xu Sun, Qingfeng Wang, Bingjian Liu, Gary Burnett
Considering that a significant portion of the current pedestrian population has limited exposure to automated vehicles (AVs), it is crucial to have a reliable instrument for assessing pedestrians' initial trust in AVs. Using a survey of 436 pedestrians, this study developed and validated the PITQA (Pedestrians' Initial Trust Questionnaire for AVs) scale using partial least squares structural equation modeling (PLS-SEM). The proposed scale will be valuable for monitoring the progression of trust over time and for considering trust-related factors during the design process. The results revealed that seven key constructs significantly contribute to predicting initial trust between pedestrians and AVs: propensity to trust, perceived statistical reliability, dependability and competence, perceived predictability, familiarity, authority/subversion, care/harm, and sanctity/degradation. These results shed light on how individuals' trust propensity and different trust/trustworthiness attributes might constitute different aspects of initial trust in the pedestrian-AV context. The developed scale can be a useful tool for future research on trust calibration and the design of AVs tailored for vulnerable road users.
Title: Development of a measurement instrument for pedestrians’ initial trust in automated vehicles. International Journal of Human-Computer Studies, Vol. 191, Article 103344.
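The abstract does not detail the validation computations, but scale development of this kind typically reports an internal-consistency statistic for each construct alongside the PLS-SEM results. As an illustration only (not the authors' procedure), a minimal pure-Python sketch of Cronbach's alpha for one multi-item construct:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one multi-item construct.

    items: list of k item-score lists, each covering the same
    respondents (e.g. k Likert items answered by n pedestrians).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))
```

Values near 1 indicate that a construct's items move together; scale papers conventionally look for alpha above 0.7 before trusting a construct.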
Pub Date: 2024-07-23. DOI: 10.1016/j.ijhcs.2024.103340
Luca Turchet, Domenico Stefani, Johan Pauwels
The integration of emotion recognition capabilities within musical instruments can spur the emergence of novel art formats and services for musicians. This paper proposes the concept of emotionally-aware smart musical instruments: a class of musical devices embedding an artificial intelligence agent able to recognize the emotion contained in the musical signal. Two prototypes, an emotionally-aware smart piano and a smart electric guitar, were created, each embedding a recognition method for happiness, sadness, relaxation, aggressiveness, and combinations thereof. A user study, conducted with eleven pianists and eleven electric guitarists, revealed the strengths and limitations of the developed technology. On average, musicians appreciated the proposed concept and found value in it across various musical activities. Most participants tended to excuse erroneous or partially erroneous classifications of the emotions they expressed, reporting that they understood why a given output was produced. Some participants even seemed to trust the system more than their own judgments. Conversely, other participants asked for improvements to the accuracy, reliability, and explainability of the system in order to achieve a higher degree of partnership with it.
Title: Musician-AI partnership mediated by emotionally-aware smart musical instruments. International Journal of Human-Computer Studies, Vol. 191, Article 103340.
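The recognition method itself is not described in the abstract. As a hedged illustration only: the four target emotions map naturally onto the quadrants of Russell's valence-arousal circumplex, so the final labelling stage of such a system could look like the sketch below (the function name and zero thresholds are assumptions, not the paper's implementation):

```python
def classify_emotion(valence, arousal):
    """Map a (valence, arousal) estimate, each in [-1, 1], to one of
    the four emotion labels used in the study, following the
    quadrants of Russell's circumplex model."""
    if valence >= 0:
        # positive valence: pleasant emotions
        return "happiness" if arousal >= 0 else "relaxation"
    # negative valence: unpleasant emotions
    return "aggressiveness" if arousal >= 0 else "sadness"
```

In practice the valence and arousal values would come from an upstream audio-feature model; the quadrant mapping is the only part sketched here.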
Pub Date: 2024-07-23. DOI: 10.1016/j.ijhcs.2024.103343
Muna Alebri, Enrico Costanza, Georgia Panagiotidou, Duncan P. Brumby, Fatima Althani, Riccardo Bovo
As visualisations reach a broad range of audiences, designing visualisations that attract and engage becomes more critical. Prior work suggests that semantic icons entice and immerse the reader; however, little is known about their impact on informational tasks and in situations where the viewer's attention is divided by a distracting element. To address this gap, we first explored a variety of semantic icons with various visualisation attributes. The findings of this exploration shaped the design of our primary comparative online user studies, where participants saw a target visualisation alongside a distracting visualisation on a web page and were asked to extract insights. Their engagement was measured through three dependent variables: (1) visual attention, (2) effort to write insights, and (3) self-reported engagement. In Study 1, we discovered that visualisations with semantic icons were consistently perceived to be more engaging than the plain version. However, we found no differences in visual attention and effort between the two versions. Thus, we ran Study 2 using visualisations with more salient semantic icons to achieve maximum contrast. The results were consistent with our first study. Furthermore, we found that semantic icons elevated engagement with visualisations depicting topics that participants found less interesting and engaging. We extend prior work by demonstrating the value of semantic icons not only for first impressions but also after performing an informational task (extracting insights) and reflecting on the visualisation. Our findings may be helpful to visualisation designers and storytellers keen on designing engaging visualisations with limited resources. We also contribute reflections on measuring engagement with visualisations and provide future directions.
Title: Visualisations with semantic icons: Assessing engagement with distracting elements. International Journal of Human-Computer Studies, Vol. 191, Article 103343.
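Visual attention in studies with a target and a distractor is often operationalised as the share of gaze samples landing inside the target's area of interest. A minimal sketch of that measure (an assumption about the metric, not the authors' exact analysis):

```python
def dwell_proportion(gaze_points, aoi):
    """Share of gaze samples inside a rectangular area of interest.

    gaze_points: iterable of (x, y) screen coordinates.
    aoi: (x0, y0, x1, y1) with x0 <= x1 and y0 <= y1.
    """
    x0, y0, x1, y1 = aoi
    points = list(gaze_points)
    hits = sum(1 for x, y in points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(points)
```

Comparing this proportion for the target versus the distractor visualisation would quantify how attention is divided between them.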
Pub Date: 2024-07-18. DOI: 10.1016/j.ijhcs.2024.103329
Abhraneil Dam, YeaJi Lee, Arsh Siddiqui, Wallace Santos Lages, Myounghoon Jeon
Augmenting visual art in art galleries can be an effective Audio Augmented Reality (AAR) application for indoor exploration. In the current study, eight paintings from four genres were augmented with audio through their sonification. Basic Audio was generated with a sonification algorithm that identified the major colors of each painting, and Enhanced Audio was generated by a musician enhancing the Basic Audio; both were presented with the paintings and compared against a No Audio condition. Twenty-six participants viewed each painting in all three conditions; eye gaze metrics and qualitative data were collected. Results showed that Enhanced Audio led to significantly greater engagement and more positive sentiments than Basic Audio. Thematic analysis showed semantic and syntactic relationships between the audio and the paintings, and a tendency of the audio to guide users' gaze over time. Findings from this study can guide future AAR developments to improve auditory display designs that enhance visual experiences.
Title: Audio augmented reality using sonification to enhance visual art experiences: Lessons learned. International Journal of Human-Computer Studies, Vol. 191, Article 103329.
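The abstract says Basic Audio was generated by "identifying the major colors of the paintings" but does not give the algorithm. A hedged sketch of one plausible pipeline, dominant-hue extraction followed by a hue-to-pitch mapping (the function names and the 12-bin/one-octave choices are assumptions, not the study's method):

```python
import colorsys
from collections import Counter

def dominant_hues(pixels, n=3):
    """Bucket pixel hues into 12 bins and return the centres (in
    degrees) of the n most common bins. pixels: (r, g, b) in 0-255."""
    bins = Counter()
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        bins[int(h * 12) % 12] += 1
    return [b * 30 + 15 for b, _ in bins.most_common(n)]

def hue_to_frequency(hue_deg, base=220.0):
    """Map one hue revolution (0-360 degrees) onto the octave above base."""
    return base * 2 ** (hue_deg / 360.0)
```

Feeding the returned frequencies to any synthesiser would yield a chord whose notes track the painting's dominant colors.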
Pub Date: 2024-07-17. DOI: 10.1016/j.ijhcs.2024.103339
Chiuhsiang Joe Lin, Susmitha Canny
This study investigates the effects of key size, typing angle, and typing technique on typing productivity, biomechanics (muscle activity), and subjective experience with a mixed reality keyboard. The findings suggest that smaller key sizes, such as 16 mm, may not be suitable due to slower typing speed, lower accuracy, lower user experience, higher muscle activity, and greater motion sickness. Typing with both index fingers results in the highest typing speed, while using only a single index finger provides higher accuracy. Placing the keyboard at eye height leads to the highest typing speed, as participants can easily view the keys and the virtual environment simultaneously. However, typing accuracy is not affected by typing angle or typing technique. Implementing these findings in virtual keyboard design could potentially improve workers' productivity and decrease errors in mixed reality environments.
Title: Investigating the effect of key size, typing angle, and typing technique of virtual keyboard on typing productivity, biomechanics, and usability in a mixed reality environment. International Journal of Human-Computer Studies, Vol. 191, Article 103339.
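Typing speed and accuracy in text-entry research are conventionally computed as words-per-minute (five characters per "word") and the minimum-string-distance error rate. The sketch below shows these standard formulas; they may or may not match the paper's exact measures:

```python
def words_per_minute(transcribed, seconds):
    """Standard text-entry speed: one 'word' is five characters."""
    return (len(transcribed) / 5) / (seconds / 60)

def error_rate(presented, transcribed):
    """Minimum string distance (Levenshtein) error rate: the edits
    needed to turn the transcribed text into the presented text,
    normalised by the longer of the two strings."""
    m, n = len(presented), len(transcribed)
    d = list(range(n + 1))  # distances for the previous row
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + cost)
    return d[n] / max(m, n)
```

For example, a 50-character transcription entered in one minute gives 10 WPM, and a single wrong character in a three-character phrase gives an error rate of one third.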
Pub Date : 2024-07-09 DOI: 10.1016/j.ijhcs.2024.103327
Yang Li , Juan Liu , Jin Huang , Yang Zhang , Xiaolan Peng , Yulong Bian , Feng Tian
Target selection is a crucial task in augmented reality (AR). Recent evidence suggests that user motion can significantly influence target selection. However, no systematic research has examined target selection under user motions of varying intensity and different AR settings. This study investigates the effects of four user motions (i.e., standing, walking, running, and jumping) and two viewing modes (i.e., viewpoint-dependent and viewpoint-independent) on users' target selection performance in AR. Two typical selection techniques (i.e., virtual hand and ray-casting) were used for short-range and long-range selection tasks, respectively. Our results indicate that target selection performance decreased as the intensity of user motion increased, and that users performed better in the viewpoint-independent mode than in the viewpoint-dependent mode. We also observed that users took longer to select targets with the ray-casting technique than with the virtual hand technique. We conclude with a set of design guidelines to improve the AR target selection performance of users while in motion.
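The abstract contrasts virtual-hand and ray-casting selection. As background, ray-casting selection resolves to a ray–target intersection test: the nearest target intersected by a ray from the controller (or head) is selected. The sketch below illustrates this for spherical targets; the function name, fixed target radius, and NumPy representation are our illustrative assumptions, not details from the paper.

```python
import numpy as np

def ray_casting_select(origin, direction, targets, radius):
    """Return the index of the nearest sphere hit by the ray, or None.

    origin, direction: 3-vectors (direction need not be normalised).
    targets: (N, 3) array of sphere centres; radius: common sphere radius.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)           # unit ray direction
    o = np.asarray(origin, dtype=float)
    best, best_t = None, np.inf
    for i, centre in enumerate(np.asarray(targets, dtype=float)):
        oc = o - centre
        # Solve |o + t*d - centre|^2 = radius^2, a quadratic in t
        # with a = d.d = 1 after normalisation.
        b = 2.0 * np.dot(d, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            continue                     # ray misses this sphere
        t = (-b - np.sqrt(disc)) / 2.0   # nearer intersection point
        if 0 <= t < best_t:              # keep the closest hit in front
            best, best_t = i, t
    return best
```

Jitter from walking, running, or jumping perturbs `origin` and `direction` each frame, which is one mechanical reason ray-casting (where small angular errors grow with distance) can be slower under motion than a virtual-hand technique.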
{"title":"Evaluating the effects of user motion and viewing mode on target selection in augmented reality","authors":"Yang Li , Juan Liu , Jin Huang , Yang Zhang , Xiaolan Peng , Yulong Bian , Feng Tian","doi":"10.1016/j.ijhcs.2024.103327","DOIUrl":"10.1016/j.ijhcs.2024.103327","url":null,"abstract":"<div><p>Target selection is a crucial task in augmented reality (AR). Recent evidence suggests that user motion can significantly influence target selection. However, no systematic research has been conducted on target selection within varied intensity user motions and AR settings. This paper was carried out to investigate the effects of four user motions (i.e., standing, walking, running, and jumping) and two viewing modes (i.e., viewpoint-dependent and viewpoint-independent) on user performance of target selection in AR. Two typical selection techniques (i.e., virtual hand and ray-casting) were utilized for short-range and long-range selection tasks, respectively. Our results indicate that the target selection performance decreased as the intensity of user motion increased, and users demonstrated better performance in the viewpoint-independent mode than in the viewpoint-dependent mode. We also observed that users took a longer amount of time to select targets when using the ray-casting technique than the virtual hand technique. We conclude with a set of design guidelines to improve the AR target selection performance of users while in motion.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"191 ","pages":"Article 103327"},"PeriodicalIF":5.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141707376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}