Liangding Li, Stephanie Carnell, Katherine Harris, Linda J. Walters, D. Reiners, C. Cruz-Neira
Our paper presents LIFT, a system that enables educators to create immersive virtual field trip experiences for their students. LIFT overcomes the challenges of enabling non-technical educators to create their own content and allows educators to act as guides during the immersive experience. The system combines live-streamed 360° video, 3D models, and live instruction to create collaborative virtual field trips. To evaluate LIFT, we developed a field trip with biology educators from the University of Central Florida (UCF) and showcased it at a science festival. Our results suggest that LIFT can help educators create immersive educational content while out in the field. However, our pilot observational study at the museum highlighted the need for further research to explore the instructional design of mixed immersive content created with LIFT. Overall, our work provides an application development framework for educators to create immersive, hands-on field trip experiences.
{"title":"LIFT - A System to Create Mixed 360° Video and 3D Content for Live Immersive Virtual Field Trip","authors":"Liangding Li, Stephanie Carnell, Katherine Harris, Linda J. Walters, D. Reiners, C. Cruz-Neira","doi":"10.1145/3573381.3596162","DOIUrl":"https://doi.org/10.1145/3573381.3596162","url":null,"abstract":"Our paper presents LIFT, a system that enables educators to create immersive virtual field trip experiences for their students. LIFT overcomes the challenges of enabling non-technical educators to create their own content and allows educators to act as guides during the immersive experience. The system combines live-streamed 360° video, 3D models, and live instruction to create collaborative virtual field trips. To evaluate LIFT, we developed a field trip with biology educators from the University of Central Florida(UCF) and showcased it at a science festival. Our results suggest that LIFT can help educators create immersive educational content while out in the field. However, our pilot observational study at the museum highlighted the need for further research to explore the instructional design of mixed immersive content created with LIFT. Overall, our work provides an application development framework for educators to create immersive, hands-on field trip experiences.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126669884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The production of immersive media often involves 360-degree viewing on mobile or immersive VR devices, particularly in the field of immersive journalism. However, it is unclear how the different technologies used to present such media affect the experience of presence. To investigate this, a laboratory experiment was conducted with 87 participants who were assigned to one of three conditions: HMD-360, Monitor-360, or Monitor-article, representing three distinct levels of technological immersion. All three conditions represented the same base content, with high and mid-immersion featuring a panoramic 360-video and low-immersion presenting an article composed of a transcript and video stills. The study found that presence could be considered a composite of Involvement, Naturalness, Location, and Distraction. Mid- and high-immersion conditions elicited both higher Involvement and higher Distraction compared to low immersion. Furthermore, the participants’ propensity for psychological immersion maximized the effects of technological immersion, but only through the aspect of Involvement. In conclusion, the study sheds light on how different technologies used to present immersive media affect the experience of presence and suggests that higher technological immersiveness does not necessarily result in a higher reported presence.
{"title":"More Immersed but Less Present: Unpacking Factors of Presence Across Devices","authors":"Mila Bujić, M. Salminen, Juho Hamari","doi":"10.1145/3573381.3596152","DOIUrl":"https://doi.org/10.1145/3573381.3596152","url":null,"abstract":"The production of immersive media often involves 360-degree viewing on mobile or immersive VR devices, particularly in the field of immersive journalism. However, it is unclear how the different technologies used to present such media affect the experience of presence. To investigate this, a laboratory experiment was conducted with 87 participants who were assigned to one of three conditions: HMD-360, Monitor-360, or Monitor-article, representing three distinct levels of technological immersion. All three conditions represented the same base content, with high and mid-immersion featuring a panoramic 360-video and low-immersion presenting an article composed of a transcript and video stills. The study found that presence could be considered a composite of Involvement, Naturalness, Location, and Distraction. Mid- and high-immersion conditions elicited both higher Involvement and higher Distraction compared to low immersion. Furthermore, the participants’ propensity for psychological immersion maximized the effects of technological immersion, but only through the aspect of Involvement. In conclusion, the study sheds light on how different technologies used to present immersive media affect the experience of presence and suggests that higher technological immersiveness does not necessarily result in a higher reported presence.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117328698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Charlotte Scarpa, G. Haese, Toinon Vigier, P. Le Callet
The building sector and indoor environment design are undergoing major changes, and there is a need to reconsider how offices are built from a user-centric point of view. Research has shown the influence of perceived comfort and satisfaction on workplace performance. By understanding how multi-sensory information is integrated in the nervous system and which environmental parameters most influence perception, it could be possible to improve work environments. With the emergence of new virtual reality (VR) and augmented reality (AR) technologies, the collection and processing of sensory information is rapidly advancing, opening up more dynamic aspects of sensory perception. In simulated environments, environmental parameters can be easily manipulated at reasonable cost, allowing the user’s sensory experience to be controlled and guided. Moreover, the effects of contextual and surrounding stimuli on users can be collected throughout a test in the form of physiological and behavioral data. Using indoor simulations, the goal of this doctoral research is to develop a multi-criteria comfort scale based on physiological indicators under performance constraints. This would make it possible to define new quality indicators combining the different physical factors adapted to the uses and the space. The first step toward these objectives is to develop and validate an immersive and interactive methodology for assessing the effect of multisensory information on comfort and performance in work environments.
{"title":"Construction of immersive and interactive methodology based on physiological indicators to subjectively and objectively assess comfort and performances in work offices","authors":"Charlotte Scarpa, G. Haese, Toinon Vigier, P. Le Callet","doi":"10.1145/3573381.3597233","DOIUrl":"https://doi.org/10.1145/3573381.3597233","url":null,"abstract":"The building sector and the indoor environment conception is undergoing major changes. There is a need to reconsider the way offices are built from a user’s centric point of view. Research has shown the influence of perceived comfort and satisfaction on performance in the workplace. By understanding how multi-sensory information is integrated into the nervous system and which environmental parameters influence the most perception, it could be possible to improve work environments. With the emergence of new virtual reality (VR) and augmented reality (AR) technologies, the collection and processing of sensory information is rapidly advancing, moving forward more dynamic aspects of sensory perception. Through simulated environments, environmental parameters can be easily manipulated at reasonable costs, allowing control and guiding the user’s sensory experience. Moreover, the effects of contextual and surrounding stimuli on users can be easily collected throughout the test, in the form of physiological and behavioral data. Through the use of indoor simulations, this doctoral research goal is to develop a multi-criteria comfort scale based on physiological indicators under performance constraints. In doing this, it would be possible to define new quality indicators combining the different physical factors adapted to the uses and space. In order to achieve the objectives of this project, the first step is to develop and validate an immersive and interactive methodology for the assessment of multisensory information on comfort and performance in work environments.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114212086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Miguel Fernández-Dasí, Mario Montagud Climent, Isaac Fraile, Josep Paradells, S. Fernández
This paper reports on research toward enabling and understanding interactive social VR360 video viewing scenarios, relying exclusively on web-based technologies and using different types of consumption devices. After motivating the relevance of the research topic and its associated impact, the paper elaborates on the key requirements, features, and system components needed to effectively enable such scenarios, such as adaptive and low-latency streaming, media synchronization, social presence, interaction channels, and assistive methods. For each of these features and components, different alternatives are assessed and proof-of-concept implementations are provided. By effectively combining and integrating all these contributions, an end-to-end platform can be built and used as a research framework to explore the applicability and potential benefits of social VR360 viewing in a variety of use cases, such as education, culture, or surveillance, by tailoring the technological components based on lessons learned from experimental studies. These use case studies can also provide relevant insights into activity patterns, behaviors, and preferences in social viewing scenarios.
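Among the listed components, media synchronization lends itself to a compact illustration: each client compares its playback position against a shared master position and gently slews its playback rate instead of hard-seeking, which would be visible to the viewer. This is a generic sketch of the technique, not the paper's implementation; the function name and thresholds below are illustrative assumptions.

```python
def rate_adjustment(local_pos, master_pos, deadband=0.04, max_slew=0.05):
    """Return a playback-rate multiplier that nudges a client toward the
    master playback position (all positions in seconds).

    Within `deadband` seconds of drift, play at normal speed; otherwise
    slew proportionally, clamped to `max_slew` so the resulting pitch
    shift stays largely unnoticeable.
    """
    drift = master_pos - local_pos          # positive = this client lags
    if abs(drift) <= deadband:              # close enough: play normally
        return 1.0
    slew = max(-max_slew, min(max_slew, drift))
    return 1.0 + slew
```

In a web client this multiplier would typically be applied to the video element's playback rate on every synchronization tick.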
{"title":"Enabling and Understanding Interactive Social VR360 Video Viewing","authors":"Miguel Fernández-Dasí, Mario Montagud Climent, Isaac Fraile, Josep Paradells, S. Fernández","doi":"10.1145/3573381.3597216","DOIUrl":"https://doi.org/10.1145/3573381.3597216","url":null,"abstract":"This paper reports on the research being done towards enabling and understanding interactive social VR360 video viewing scenarios, by exclusively relying on web-based technologies, and using different types of consumption devices. After motivating the relevance of the research topic and associated impact, the paper elaborates on key requirements, features, and system components to effectively enable such scenarios, such as: adaptive and low-latency streaming, media synchronization, social presence, interaction channels, and assistive methods. For each of these features and components, different alternatives are assessed and proof of concept implementations are being provided. With an effective combination and integration of all these contributions, an end-to-end platform can be built and used as a research framework to explore the applicability and potential benefits of social VR360viewing in a variety of use cases, like education, culture or surveillance, by tailoring the technological components based on lessons learned from experimental studies. These use case studies can also provide relevant insights into activity patterns, behaviors, and preferences in Social Viewing scenarios.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124876530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of this project is to create an interactive assistant that incorporates different assistive features for blind and visually impaired people. The assistant might incorporate screen readers, magnifiers, voice synthesis, OCR, GPS, face recognition, and object recognition, among other tools. Recently, the work done by OpenAI and Be My Eyes with the implementation of GPT-4 is comparable to the aim of this project; it shows that developing an interactive assistant has become simpler thanks to recent advances in large language models. However, older methods like named entity recognition and intent classification are still valuable for building lightweight assistants. A hybrid solution combining both methods seems feasible: it would help reduce the computational cost of the assistant and facilitate the data collection process. Despite being more complex to implement in a multilingual and multimodal context, a hybrid solution has the potential to be used offline and to consume fewer resources.
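The hybrid idea — handle common requests with a cheap, offline-capable intent matcher and defer to a large language model only when nothing matches — can be sketched as follows. The intent table, handler names, and routing function are hypothetical illustrations, not part of the project described above.

```python
# Illustrative sketch of a hybrid assistant router: a lightweight,
# offline-capable intent lookup first, with a costly LLM fallback only
# for utterances the lookup cannot handle. All names are hypothetical.

LOCAL_INTENTS = {
    "read this text": "ocr",
    "where am i": "gps_locate",
    "who is in front of me": "face_recognition",
    "what is this object": "object_recognition",
}

def route(utterance, llm_fallback=lambda u: ("llm", u)):
    """Return ("local", handler) for a known intent, else defer to the LLM."""
    key = utterance.strip().lower().rstrip("?")
    if key in LOCAL_INTENTS:                 # cheap path: no network, no GPU
        return ("local", LOCAL_INTENTS[key])
    return llm_fallback(utterance)           # heavyweight path
```

A real system would replace the exact-match table with a trained intent classifier and a confidence threshold, but the control flow stays the same.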
{"title":"Developing an Interactive Agent for Blind and Visually Impaired People","authors":"V. Stragier, Omar Seddati, T. Dutoit","doi":"10.1145/3573381.3596471","DOIUrl":"https://doi.org/10.1145/3573381.3596471","url":null,"abstract":"The aim of this project is to create an interactive assistant that incorporates different assistive features for blind and visually impaired people. The assistant might incorporate screen readers, magnifiers, voice synthesis, OCR, GPS, face recognition, and object recognition among other tools. Recently, the work done by OpenAI and Be My Eyes with the implementation of GPT-4 is comparable to the aim of this project. It shows the development of an interactive assistant has become simpler due to recent developments in large language models. However, older methods like named entity recognition and intent classification are still valuable to build lightweight assistants. A hybrid solution combining both methods seems possible, would help to reduce the computational cost of the assistant, and would facilitate the data collection process. Despite being more complex to implement in a multilingual and multimodal context, a hybrid solution has the potential to be used offline and to consume less resources.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128499600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audiovisual media is an integral part of many people’s everyday lives. People with accessibility needs, especially complex accessibility needs, however, may face challenges accessing this content. This doctoral work addresses the problem by investigating how complex accessibility needs can be met through content personalisation leveraging data-driven methods. To this end, I will collaborate with people with aphasia, a complex language impairment, as an exemplar community of people with complex accessibility needs. To better understand their needs, I will use collaborative design techniques, involving them in the design, development, and evaluation of systems that demonstrate the benefits of content personalisation as an accessibility intervention. This paper outlines the background and motivation of this PhD, the work already completed, and currently planned future work.
{"title":"Object-Based Access: Enhancing Accessibility with Data-Driven Media","authors":"Alexandre Nevsky","doi":"10.1145/3573381.3596500","DOIUrl":"https://doi.org/10.1145/3573381.3596500","url":null,"abstract":"Audiovisual media is an integral part of many people’s everyday lives. People with accessibility needs, especially people with complex accessibility needs, however, may face challenges accessing this content. This doctoral work addresses this problem by investigating how complex accessibility needs can be met by content personalisation by leveraging data-driven methods. To this end, I will collaborate with people with aphasia, a complex language impairment, as an exemplar community of people with complex accessibility needs. To better understand the needs of people with aphasia, I will use collaborative design techniques to meet the needs of end users. This will involve them in the design, development and evaluation of systems that demonstrate the benefits of content personalisation as an accessibility intervention. This paper outlines the background and motivation to this PhD, the work that has already been completed, and current planned future work.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122898774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accessing match statistics on a second screen while watching soccer matches on TV has grown into a popular practice. Although early work has shown how gestures on touch screens perform in distracting environments, little is known about how the specific gestures used to retrieve information on a second screen (swiping and tapping) affect the experience of viewing soccer games on TV. To investigate this, a mixed-method user study was conducted with 28 participants, comprising prototype tests with short clips of a soccer match, questionnaires, and short interviews. The results revealed that more participants preferred tapping than swiping under both second-screen activity timing scenarios, i.e., On-Play and Off-Play. However, neither swiping nor tapping yielded better recall of verbatim match stats or exact comparisons in either On-Play or Off-Play. Participant evaluations in On-Play and the interviews give clues regarding this difference.
{"title":"Tap or Swipe? Effects of Interaction Gestures for Retrieval of Match Statistics via Second Screen on Watching Soccer on TV","authors":"Ege Sezen, Emmanuel Tsekleves, A. Mauthe","doi":"10.1145/3573381.3596473","DOIUrl":"https://doi.org/10.1145/3573381.3596473","url":null,"abstract":"Accessing match statistics through second screen while watching soccer matches on TV has grown into a popular practice. Although early works have shown how gestures on touch screens performed under distracting environments, little is known regarding how specific gestures (swiping and tapping) to retrieve information on second screen affect the viewing experience of soccer games on TV. For this, a mixed-method user study, which included prototype tests of watching short clips of a soccer match, questionnaires and short interviews, was conducted with 28 participants. The results revealed that the number of people who preferred tapping was more than the number of people who favored swiping under two different second screen activity time scenarios i.e. On-Play or Off-Play. However, neither swiping nor tapping yield better performance of recalling verbatim match stats and exact comparisons in both On-Play and Off-Play. Participant evaluations in On-Play and interviews give us clues regarding such difference.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116088282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Robotham, Ashutosh Singla, A. Raake, Olli S. Rummukainen, Emanuël Habets
This study uses a mixed between- and within-subjects test design to evaluate the influence of interactive formats on the quality of binaurally rendered 360° spatial audio content. Focusing on ecological validity using real-world recordings of 60 s duration, three independent groups of subjects were exposed to three formats: audio only (A), audio with 2D visuals (A2DV), and audio with head-mounted display (AHMD) visuals. Within each interactive format, two sessions were conducted to evaluate degraded audio conditions: bit-rate and Ambisonics order. Our results show a statistically significant effect (p < .05) of format on spatial audio quality ratings only for Ambisonics order. Exploration data analysis shows that format A yields little variability in exploration, while formats A2DV and AHMD yield a broader viewing distribution of the 360° content. The results imply that audio quality factors can be optimized depending on the interactive format.
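For context on why Ambisonics order is a natural degradation axis: a full-sphere Ambisonics representation of order N carries (N + 1)² channels, so lowering the order reduces the channel count — and hence the bit-rate — quadratically. The helper below just states that standard relationship; the specific orders shown are generic examples, not necessarily the conditions tested in the study.

```python
def ambisonics_channels(order: int) -> int:
    """Channel count of a full-sphere Ambisonics representation of the
    given order: (N + 1) ** 2."""
    return (order + 1) ** 2

# First order needs 4 channels; third order already needs 16, so dropping
# from third to first order cuts the channel payload by a factor of four.
```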
{"title":"Influence of Multi-Modal Interactive Formats on Subjective Audio Quality and Exploration Behavior","authors":"T. Robotham, Ashutosh Singla, A. Raake, Olli S. Rummukainen, Emanuël Habets","doi":"10.1145/3573381.3596155","DOIUrl":"https://doi.org/10.1145/3573381.3596155","url":null,"abstract":"This study uses a mixed between- and within-subjects test design to evaluate the influence of interactive formats on the quality of binaurally rendered 360° spatial audio content. Focusing on ecological validity using real-world recordings of 60 s duration, three independent groups of subjects () were exposed to three formats: audio only (A), audio with 2D visuals (A2DV), and audio with head-mounted display (AHMD) visuals. Within each interactive format, two sessions were conducted to evaluate degraded audio conditions: bit-rate and Ambisonics order. Our results show a statistically significant effect (p < .05) of format only on spatial audio quality ratings for Ambisonics order. Exploration data analysis shows that format A yields little variability in exploration, while formats A2DV and AHMD yield broader viewing distribution of 360° content. The results imply audio quality factors can be optimized depending on the interactive format.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127704224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This demonstration presents ScenaConnect, a multisensory device that allows people to live various multisensory experiences. ScenaConnect is inexpensive, compact, and easy to install, and makes it possible to enrich experiences by adding new interactions. The demonstration will present two use cases: the first is an interactive math exercise, and the second is a multisensory experience that takes the visitor on a journey through history. Moreover, ScenaConnect could be used in museums for immersive and interactive experiences, or by teachers to make their students’ learning more interactive and adapted. The perspective is to allow non-experts in computer science to quickly integrate ScenaConnect into many and varied experiences thanks to the software ScenaProd, which, like ScenaConnect, is a goal of the PRIM project presented in more detail in this paper.
{"title":"ScenaConnect: an original device to enhance experiences with multisensoriality","authors":"Justin Debloos, C. Jost, D. Archambault","doi":"10.1145/3573381.3597225","DOIUrl":"https://doi.org/10.1145/3573381.3597225","url":null,"abstract":"This demonstration aims at presenting ScenaConnect, a multisensory device which allows people to live various several multisensory experiences. ScenaConnect is inexpensive, compact, easy to install and allows to improve experiences in added new interactions. The demonstration will present two cases of use. The first one is an interactive math exercise and the second one is a multisensory experience that will take the visitor on a journey through history. Moreover, ScenaConnect could be used in museums for immersive and interactive experiences or by a teacher who can use it to make the learning of his students more interactive and adapted. The perspectives are to allows non-expert in computer science to quickly integrate ScenaConnect in several and various experiences thanks to the software ScenaProd, which is, like ScenaConnect, a goal of the PRIM project presented in more detail on this paper.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132112239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andreina Nunez Morales, Eleuda Nuñez, Masakazu Hirokawa, L. Imbesi, Ioannis Chatzigiannakis
International migration forces people into an unfamiliar reality in which their customs and values lose relevance. Moreover, former relationships are left behind, which makes immigrants more likely to experience loneliness. This study focuses particularly on Venezuelan immigrants by incorporating cultural aspects into a solution aimed at reducing loneliness and increasing social connectedness. Among Venezuelans, coffee is a staple of their daily routine and their favorite social beverage. We propose KEPEIN, a coffee maker-shaped interface to transfer a sense of presence and share coffee over distance. Through an experimental study, we evaluated the user’s perception and reaction when communicating through the interface. The results show potential added value to communication by including KEPEIN in a traditional remote interaction scenario. We discuss the benefits and limitations of this type of tangible communication interface and the importance of incorporating culture into the design of solutions for immigrants.
{"title":"A Social Awareness Interface for Helping Immigrants Maintain Connections to Their Families and Cultural Roots: The Case of Venezuelan Immigrants","authors":"Andreina Nunez Morales, Eleuda Nuñez, Masakazu Hirokawa, L. Imbesi, Ioannis Chatzigiannakis","doi":"10.1145/3573381.3596461","DOIUrl":"https://doi.org/10.1145/3573381.3596461","url":null,"abstract":"International migration forces people into an unfamiliar reality in which their customs and values lose relevance. Moreover, former relationships are left behind, which makes immigrants more likely to experience loneliness. This study focuses particularly on Venezuelan immigrants by incorporating cultural aspects into a solution aimed at reducing loneliness and increasing social connectedness. Among Venezuelans, coffee is a staple of their daily routine and their favorite social beverage. We propose KEPEIN, a coffee maker-shaped interface to transfer a sense of presence and share coffee over distance. Through an experimental study, we evaluated the user’s perception and reaction when communicating through the interface. The results show potential added value to communication by including KEPEIN in a traditional remote interaction scenario. We discuss the benefits and limitations of this type of tangible communication interface and the importance of incorporating culture into the design of solutions for immigrants.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131218765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}