"Evaluation of Priority-Dependent Notifications for Smart Glasses Based on Peripheral Visual Cues"
Anja K. Faulhaber, Moritz Hoppe, Ludger Schmidt. i-com, pp. 239–252, 2022-07-19. DOI: 10.1515/icom-2022-0022
Abstract: Smart glasses are increasingly commercialized and may someday replace, or at least complement, smartphones. Common smartphone features, such as notifications, should then also be available for smart glasses. However, notifications are disruptive: even unimportant ones frequently interrupt users performing a primary task, often causing distraction and performance degradation. We therefore propose a concept for displaying notifications in the peripheral field of view of smart glasses, with different visualizations depending on the notification's priority. We developed three icon-based notifications representing increasing priority: a transparent green icon that continuously becomes more opaque (low priority), a yellow icon moving up and down (medium priority), and an icon flashing red and yellow (high priority). To evaluate the concept, we conducted a study with 24 participants who performed a primary task while reacting to notifications on Nreal Light smart glasses. The results showed that reaction times for the low-priority notification were significantly higher, and it was ranked as the least distracting. The medium- and high-priority notifications did not differ clearly in noticeability, distraction, or workload. We discuss implications of our results for the perception and visualization of notifications in the peripheral field of view of smart glasses and, more generally, in augmented reality applications.
"EnvironZen: Immersive Soundscapes via Augmented Footstep Sounds in Urban Areas"
M. Schrapel, Janko Happe, M. Rohs. i-com, pp. 219–237, 2022-07-19. DOI: 10.1515/icom-2022-0020
Abstract: Urban environments are often characterized by loud and annoying sounds. Noise-cancelling headphones can suppress these negative influences and superimpose audio-augmented realities (AAR) on the acoustic environment. So far, AAR has exhibited limited interactivity, e.g., reacting only to the listener's location. In this paper we explore the superimposition of synchronized, augmented footstep sounds in urban AAR environments with noise-cancelling headphones. In an online survey, participants rated different soundscapes and sound augmentations. This served as a basis for selecting and designing soundscapes and augmentations for a subsequent in-situ field study in an urban environment with 16 participants. We found that the synchronous footstep feedback of our application EnvironZen contributes to a relaxing and immersive soundscape. Furthermore, we found that slightly delaying footstep feedback can slow down walking and that particular footstep sounds can serve as intuitive navigation cues.
"UnlockLearning – Investigating the Integration of Vocabulary Learning Tasks into the Smartphone Authentication Process"
Christina Schneegass, Sophia Sigethy, Teodora Mitrevska, Malin Eiband, D. Buschek. i-com, pp. 157–174, 2022-04-01. DOI: 10.1515/icom-2021-0037
Abstract: Frequent repetition of vocabulary is essential for effective language learning. To increase exposure to learning content, this work explores the integration of vocabulary tasks into the smartphone authentication process. We present the design and initial user experience evaluation of twelve prototypes covering three learning tasks and four common authentication types. In a three-week within-subject field study, we compared the most promising concept, implemented as a mobile language learning (MLL) application, to two baselines: (1) a novel UnlockApp that presents a vocabulary task with each authentication event, nudging users towards short, frequent learning sessions; (2) a NotificationApp that displays vocabulary tasks in a push notification in the status bar, which is always visible but requires user-initiated learning; and (3) a StandardApp that requires users to actively start in-app learning. Our study is the first to directly compare these embedding concepts for MLL, showing that integrating vocabulary learning into everyday smartphone interactions via UnlockApp and NotificationApp increases the number of answers. However, users show individual subjective preferences. Based on our results, we discuss the trade-off between higher content exposure and disturbance, as well as the related challenges and opportunities of embedding learning seamlessly into everyday mobile interactions.
"Desiderata for a Performative Hybrid Immersive Drawing Platform"
Lucas Fabián Olivero, A. Araújo. i-com, pp. 33–53, 2022-04-01. DOI: 10.1515/icom-2022-0009
Abstract: We discuss the design requirements of a software platform for constructing immersive environments through handmade spherical perspective drawings in a performative setting, with a concurrent, interactive live feed of the spherical drawing's VR visualization. We investigate current best practices and available software in order to extract functionalities, requirements, improvements, possible integrations, and future developments. We map the base requirements of the software from three sources: the state of the art of drawing techniques for spherical perspectives (equirectangular, azimuthal equidistant, and cubical), the available software for their practice, and experimentation with novel hybrid artefacts. For the latter, we use a node-based program that allows us to prototype the workflow before entering a pure coding stage. The desired software platform should integrate well within digital art practices, stimulate and facilitate the practice of anamorphic handmade spherical drawings, and expand spherical perspectives' applications through the emerging medium of Hybrid Immersive Models (HIMs).
"Auralization of Concert Halls for Touristic Purposes"
Sophie Schauer, S. Bertocci, Federico Cioli, J. Sieck, N. Shakhovska, O. Vovk. i-com, pp. 95–107, 2022-04-01. DOI: 10.1515/icom-2022-0008
Abstract: This paper presents the progress made in the AURA project, funded by the Creative Europe program with project partners from Germany, Italy, and Ukraine. The project aims to create auralized applications for three music venues, one in each project country: the Konzerthaus Berlin, the Teatro del Maggio in Florence, and the Opera House Lviv. Each will be digitally recreated and auralized before being used to conduct case studies. This paper gives insights into current digitalization and auralization techniques. The results of a digital survey are laid out, and the conception and implementation of a first auralized prototype using a hand-modeled 3D object of the Great Hall of the Konzerthaus Berlin are demonstrated. Furthermore, the use of auralization for touristic purposes is investigated, employing artificial intelligence for an audience preference analysis. We conclude with a short outlook on the ongoing course of the AURA project.
"The Virtual Theremin: Designing an Interactive Digital Music Instrument for Film Scene Scoring"
Bela Usabaev, Anna Eschenbacher, A. Brennecke. i-com, pp. 109–121, 2022-04-01. DOI: 10.1515/icom-2022-0007
Abstract: This paper presents a first prototype of a virtual Theremin instrument for accompanying film scenes with sound. The virtual Theremin is implemented as a hybrid web application. Sound control is achieved by capturing user gestures with a webcam and mapping them to the corresponding virtual Theremin parameters, pitch and volume. Different sound types can be selected. The application's underlying research is part of the multi-modal digital heritage project KOLLISIONEN, which aims to open up the private archive of the Russian filmmaker Sergej Eisenstein to a broader public in digital form. Eisenstein, a film theorist and pioneer of film montage, was particularly intrigued by the Theremin as an instrument for film sound design. The virtual Theremin presented here is therefore linked to a scene from Eisenstein's 1929 Soviet drama "The General Line", which was never originally set to music. In its first implementation state, the application connects music interaction design with digital heritage in a modular, flexible, and playful way, using contemporary web technologies to enable easy operation and the greatest possible accessibility.
"QuarantivityVR: Supporting Self-Embodiment for Non-HMD Users in Asymmetric Social VR Games"
Amal Yassien, M. A. Soliman, Slim Abdennadher. i-com, pp. 55–70, 2022-04-01. DOI: 10.1515/icom-2022-0005
Abstract: The prevalence of immersive head-mounted display (HMD) social virtual reality (VR) applications has introduced asymmetric interaction among users within the virtual environment (VE). So far, researchers have opted for (1) exploring asymmetric social VR interaction dynamics only in co-located setups, (2) assigning interdependent roles to both HMD and non-HMD users, and (3) representing non-HMD users as abstract avatars in the VE. We therefore investigate the feasibility of supporting self-embodiment in an asymmetric VR interaction mode in a remote setup. To this end, we designed an asymmetric social VR game, QuarantivityVR, that (1) supports a sense of self-embodiment for non-HMD users in a remote setting by representing them as realistic full-body avatars within the VE, and (2) augments visual-motor synchrony for non-HMD users, increasing their sense of agency and presence, by detecting their motion through a Kinect sensor and a laptop's webcam. During the game, each player performs three activities in succession: movie-guessing, spelling-bee, and answering mathematical questions. We believe that our work is a step towards including the wide spectrum of users who cannot afford full immersion and will aid researchers in creating enjoyable interactions for users in both physical and virtual spaces.
"Dyslexia and Accessibility Guidelines – How to Avoid Barriers to Access in Public Services"
Ann-Kathrin Kennecke, Daniel Wessel, Moreen Heine. i-com, pp. 139–155, 2022-04-01. DOI: 10.1515/icom-2021-0040
Abstract: Interaction is becoming increasingly digital, including interactions with public authorities, requiring websites to be accessible for all. The strong focus on written words in digital interactions allows assistive technology to improve access for many users, but it may impede usability for users with reading and writing difficulties. The present paper examines whether guidelines such as the Web Content Accessibility Guidelines (WCAG) sufficiently cover users with dyslexia and how usability can be improved for this user group. This paper expands a version previously published at the Mensch und Computer 2021 conference [1]. Using literature research and interviews with users with dyslexia, and focusing on an application of the WCAG at the country level (a German law regulating accessibility for e-government websites), we confirmed and identified gaps in the WCAG for this group. We focus on within-site search, as this function is frequently used to find relevant information, especially on infrequently visited sites such as e-government websites. Modifications to improve search were developed based on the literature and the interview results. They were empirically evaluated in an online study with 31 users with dyslexia and 71 without. Results indicate that an auto-complete function, a search that compensates for spelling errors, an indicator that the search query was corrected, search term summary information, and avoidance of capital letters were useful for both groups, while wider line spacing should only be offered as end-user customization.
"Body Language of Avatars in VR Meetings as Communication Status Cue: Recommendations for Interaction Design and Implementation"
Marco Kurzweg, Katrin Wolf. i-com, pp. 175–201, 2022-04-01. DOI: 10.1515/icom-2021-0038
Abstract: While traditional videoconferencing raises privacy issues, virtual meetings are not yet widely used. Their communication quality still lacks usability, and important non-verbal communication cues, such as body language, are underrepresented. We explore virtual avatars' body language and how it can be used to indicate meeting attendees' communication status. By comparing users' perceptions of avatar behavior, we found that avatar body language, across genders, can indicate communication willingness. We derive body language design recommendations: use attentively behaving avatars as the default, and indicate being busy through avatar actions such as drinking, typing, or talking on a phone. These actions signal that users are temporarily busy with another task but are still attending the meeting. When users are unavailable, their avatars should not be displayed at all, and for longer meeting interruptions, the user's avatar should leave the virtual meeting room.
{"title":"Editorial from the New Editor-in-Chief","authors":"Michaela Koch","doi":"10.1515/icom-2022-0016","DOIUrl":"https://doi.org/10.1515/icom-2022-0016","url":null,"abstract":"","PeriodicalId":37105,"journal":{"name":"i-com","volume":"16 1","pages":"1 - 2"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75037703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}