Evaluating the Scalability of Non-Preferred Hand Mode Switching in Augmented Reality
Jesse Smith, Isaac Wang, Winston Wei, Julia Woodward, J. Ruiz
DOI: https://doi.org/10.1145/3399715.3399850
Mode switching allows applications to support a wide range of operations (e.g., selection, manipulation, and navigation) using a limited input space. While the performance of different mode switching techniques has been extensively examined for pen- and touch-based interfaces, mode switching in augmented reality (AR) is still relatively unexplored. Prior work found that using the non-preferred hand is an efficient mode switching technique in AR. However, it is unclear how the technique performs as the number of modes increases, which is more indicative of real-world applications. Therefore, we examined the scalability of non-preferred hand mode switching in AR with two, four, six, and eight modes. We found that as the number of modes increases, performance plateaus after the four-mode condition. We also found that counting gestures have varying effects on mode switching performance in AR. Our findings suggest that modeling mode switching performance in AR is more complex than simply counting the number of available modes. Our work lays a foundation for understanding the costs associated with scaling interaction techniques in AR.
Are Thermal Attacks Ubiquitous? When Non-Expert Attackers Use Off-the-Shelf Thermal Cameras
Yasmeen Abdrabou, Yomna Abdelrahman, A. Ayman, Amr Elmougy, Mohamed Khamis
DOI: https://doi.org/10.1145/3399715.3399819
Recent work showed that applying image processing techniques to thermal images taken with high-end equipment reveals passwords entered on touchscreens and keyboards. In this paper, we investigate the susceptibility of common touch inputs to thermal attacks when non-expert attackers visually inspect thermal images. Using an off-the-shelf thermal camera, we collected thermal images of a smartphone's touchscreen and a laptop's touchpad after 25 participants had entered passwords using touch gestures and touch taps. We show that visual inspection of thermal images by 18 participants reveals the majority of passwords. Touch gestures are more vulnerable to thermal attacks (60.65% successful attacks) than touch taps (23.61%), and attacks against touchscreens are more accurate than those against touchpads (87.04% vs. 56.02%). We discuss how the affordability of thermal attacks and the nature of touch interactions make the threat ubiquitous, and the implications this has for security.
Interactive Time-Series of Measures for Exploring Dynamic Networks
Liwenhan Xie, J. O'Donnell, Benjamin Bach, Jean-Daniel Fekete
DOI: https://doi.org/10.1145/3399715.3399922
We present MeasureFlow, an interface for visually and interactively exploring dynamic networks through time-series of network measures such as link count, graph density, or node activation. When networks contain many time steps, grow large and dense, or change at high frequency, traditional visualizations that focus on network topology, such as animations or small multiples, fail to provide adequate overviews and thus fail to guide the analyst towards interesting time points and periods. MeasureFlow presents a complementary approach that relies on visualizing time-series of common network measures to provide a detailed yet comprehensive overview of when changes happen and which network measures they involve. As dynamic networks undergo changes of varying rates and characteristics, network measures provide important hints on the pace and nature of their evolution and can guide analysts in their exploration; based on a set of interactive and signal-processing methods, MeasureFlow allows an analyst to select and navigate periods of interest in the network. We demonstrate MeasureFlow through case studies with real-world data.
CrossWidgets
M. Angelini, G. Blasilli, S. Lenti, A. Palleschi, G. Santucci
DOI: https://doi.org/10.1145/3399715.3399918
Filtering is one of the basic interaction techniques in Information Visualization, with the main objective of limiting the amount of displayed information using constraints on attribute values. Research has focused on direct manipulation selection means or on simple interactors like sliders or check-boxes: while interaction with a single attribute is, in principle, straightforward, understanding the relationship between multiple attribute constraints and the actual selection can be a complex task. To cope with this problem, usually referred to as cross-filtering, the paper provides a general definition of the structure of a filter, based on domain values and data distribution; identifies visual feedback on the relationship between filter status and the current selection; and proposes guidance means to help fulfill the requested selection. Then, leveraging the definition of these design elements, the paper proposes CrossWidgets, modular attribute selectors that provide the user with feedback and guidance during complex interaction with multiple attributes. An initial controlled experiment demonstrates the benefits that CrossWidgets bring to cross-filtering activities.
Designing usable interfaces for the Industry 4.0
M. D. Gregorio, G. Nota, Marco Romano, M. Sebillo, G. Vitiello
DOI: https://doi.org/10.1145/3399715.3399861
In Industry 4.0, Human Machine Interfaces are widely used to increase the performance of production processes while reducing the number of emergencies and accidents. In manufacturing, the most typical system used to monitor production is the Andon, a graphical system used in plants to notify operators in charge of management, maintenance, and production performance of the presence of a problem. The usability of such interfaces is essential to allow an operator to identify and react effectively to potentially critical situations, and improving it is a significant challenge given the increasing complexity of the data that operators must process and understand quickly. In this paper, we present a set of guidelines to help professional developers design usable interfaces for monitoring industrial production in manufacturing. The guidelines are based on usability principles and were formalized by reviewing existing industrial interfaces. Using a realistic case study prepared with manufacturing experts, we propose an Andon interface that we developed to test the efficacy of these guidelines on a latest-generation touch-wall device.
Virtual bowling: launch as you all were there!
M. De Marsico, Emanuele Panizzi, Francesca Romana Mattei, A. Musolino, Manuel Prandini, Marzia Riso, D. Sforza
DOI: https://doi.org/10.1145/3399715.3399848
This work proposes BowlingVR, an advanced Virtual Reality (VR) multiplayer game with two main goals: the first is to provide a realistic User eXperience (UX) by reproducing the dynamics and physical context of a real bowling challenge; the second is to allow remote, distributed, socially satisfying gameplay, giving the user the illusion that the remote players are really present. The prototype was evaluated using a modified version of SUXES, a user interview schema originally devised for multimedia applications, adapted to better compare the responses of different users and obtain a more reliable estimate of user appreciation.
The Therapeutic Use of Humanoid Robots for Behavioral Disorders
Federica Amato, M. D. Gregorio, Clara Monaco, M. Sebillo, G. Tortora, G. Vitiello
DOI: https://doi.org/10.1145/3399715.3399960
In this work, we illustrate an innovative treatment for patients affected by behavioral disorders that relies on the Pepper humanoid robot. This new therapeutic methodology was created to support the therapist and make their work more engaging. Pepper is equipped with a tablet and two identical cameras. The tablet lets the patient interact with the application, while the cameras capture their emotions in real time to gauge their degree of attention and any difficulty they may have. Interaction with the tablet takes place through exercises in the form of games. The exercises performed by the subject are analyzed and combined with the data captured by the cameras, and the combined data is processed to propose appropriate levels of therapeutic activity. This process digitizes the patient's healing path so that any improvement (or worsening) is monitored, making Pepper a reliable and predictable technological intermediary for the child. The work was developed in collaboration with a diagnostic and therapeutic center. When interacting with a humanoid robot, children exhibit higher engagement, which, according to the psychologists, can be explained by the fact that a robot is emotionally less rich than a human being, so the patient feels less scared.
Visualizing Program Genres' Temporal-Based Similarity in Linear TV Recommendations
Veronika Bogina, Julia Sheidin, T. Kuflik, S. Berkovsky
DOI: https://doi.org/10.1145/3399715.3399813
There is increasing evidence that data visualization is an important and useful tool for quickly understanding and filtering large amounts of data. In this paper, we contribute to this body of work with a study that compares a chord diagram and a ranked list for presenting temporal TV program genre similarity in next-program recommendations. We consider genre similarity based on the similarity of temporal viewing patterns. We find that the chord presentation allows users to see the whole picture and improves their ability to choose items beyond the ranked list of top similar items. We believe that similarity visualization may be useful for providing both recommendations and their explanations to end users.
FeedBucket
Valentino Artizzu, Davide Fara, Riccardo Macis, L. D. Spano
DOI: https://doi.org/10.1145/3399715.3399947
Standard development libraries for Virtual and Mixed Reality support haptic feedback through low-level parameters, which do not guide developers in creating effective interactions. In this paper, we report preliminary results on a simplified structure for the creation, assignment, and execution of haptic feedback for standard controllers, with the optional feature of synchronizing a haptic pattern with auditory feedback. In addition, we present the results of a preliminary test investigating users' ability to recognize variations in the intensity and/or duration of the stimulus, especially when the two dimensions are combined to encode information.
DELEX
A. F. Abate, Aniello Castiglione, Michele Nappi, Ignazio Passero
DOI: https://doi.org/10.1145/3399715.3399820
Recent advances in Machine Learning have unveiled interesting possibilities for investigating, in real time, user characteristics and expressions such as, but not limited to, age, sex, body posture, emotions, and moods. These new opportunities lay the foundations for HCI tools for interactive applications that adopt user emotions as a communication channel. This paper presents an Emotion Controlled User Experience that changes according to user feelings and emotions analyzed at runtime. To obtain a preliminary evaluation of the proposed ecosystem, a controlled experiment was performed in an engineering and software development company, involving 60 volunteers. The subjective evaluation was based on a standard questionnaire commonly adopted for measuring the user's perceived sense of immersion in Virtual Environments. The results of the controlled experiment encourage further investigation, strengthened by the analysis of objective performance measurements and user physiological parameters.