Sebastian Feld, Christoph Roch, Thomas Gabor, Christian Seidel, F. Neukart, I. Galter, W. Mauerer, Claudia Linnhoff-Popien
The Capacitated Vehicle Routing Problem (CVRP) is an NP optimization problem (NPO) that has been of great interest for decades to both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The difficulty lies in the combinatorial explosion of possible solutions, which grows superexponentially with the number of customers. Classical solutions provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. This machine uses quantum effects to speed up computation time compared to classical computers. The difficulty in solving the CVRP on the quantum annealer lies in the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be decomposed into smaller subproblems, enabling a sequential solution of the partitioned problem. This work presents a quantum-classical hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off in comparison to existing classical solution methods regarding computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.
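As an illustration of the QUBO form the abstract refers to, the following minimal Python sketch brute-forces a three-variable QUBO that picks the cheapest of three routes under a one-hot constraint. This is not the paper's CVRP encoding; the costs and penalty weight are invented for the example, and exhaustive search merely stands in for the annealer.

```python
import itertools
import numpy as np

def qubo_energy(Q, x):
    """Energy of a binary vector x under QUBO matrix Q: x^T Q x."""
    x = np.asarray(x)
    return float(x @ Q @ x)

# Toy instance: choose exactly one of three routes with costs c.
# The one-hot constraint sum(x) == 1 is folded into Q via the
# penalty A * (sum(x) - 1)^2, with the constant term A dropped.
c = np.array([3.0, 1.0, 2.0])
A = 10.0                      # penalty weight; must dominate the costs
n = len(c)
Q = np.diag(c) + A * (np.ones((n, n)) - 2.0 * np.eye(n))

# Exhaustive search stands in for the annealer on this tiny instance.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: qubo_energy(Q, x))
print(best)  # (0, 1, 0): the cheapest single route is selected
```

The penalty expansion uses the binary identity x_i^2 = x_i, so the constraint lands on the diagonal (cost minus A) and the pairwise off-diagonal terms (plus A each).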
"A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer." Frontiers in ICT, p. 13, published 2018-11-18. doi:10.3389/fict.2019.00013
D. Iakovakis, S. Hadjidimitriou, V. Charisis, S. Bostantjopoulou, Z. Katsarou, L. Klingelhöfer, H. Reichmann, S. Dias, J. Diniz, Dhaval Trivedi, K. Chaudhuri, L. Hadjileontiadis
Parkinson’s Disease (PD) is a neurodegenerative disorder with early non-motor/motor symptoms that may evade clinical detection for years after disease onset due to their mildness and slow progression. Digital health tools that process densely sampled data streams from daily human-mobile interaction can objectify the monitoring of behavioral patterns that change due to the appearance of early PD-related signs. In this context, touchscreens can capture micro-movements of fingers during natural typing, an unsupervised activity of high frequency that can reveal insights into users’ fine-motor handling and identify motor impairment. Subjects’ typing dynamics related to the decline of their fine-motor skills, unobtrusively captured from a mobile touchscreen, were recently explored in an in-the-clinic assessment to classify early PD patients and healthy controls. In this study, estimation of individual fine-motor impairment severity scores is employed to interpret the footprint of specific underlying symptoms (such as brady-/hypokinesia (B/H-K) and rigidity (R)) on keystroke dynamics that cause group-wise variations. Regression models are employed for each fine-motor symptom, exploiting features from keystroke dynamics sequences in in-the-clinic data captured from 18 early PD patients and 15 controls. Results show that the R and B/H-K UPDRS Part III single-item scores can be predicted with accuracies of 78% and 70%, respectively. The generalization power of these regressors trained on in-the-clinic data was further tested in a PD screening problem using data harvested in-the-wild over a longitudinal period (mean ± std: 7 ± 14 weeks) via a dedicated smartphone application for unobtrusive sensing of routine smartphone typing. From a pool of 210 active users, data from 13 self-reported PD patients and 35 controls were selected based on demographic matching with the in-the-clinic cohort. The results show that the estimated indices achieve ROC AUCs of 0.84 (R) and 0.80 (B/H-K), with sensitivity/specificity of 0.77/0.80 (R) and 0.92/0.63 (B/H-K), for classifying PD patients and controls in the in-the-wild setting. The proposed approach constitutes a step forward toward unobtrusive remote screening and detection of specific early PD signs from mobile-based human-computer interaction, introduces an interpretable methodology for the medical community, and contributes to the continuous improvement of tools and technologies deployed in-the-wild.
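As a hedged illustration of the per-symptom regression step described above, the sketch below fits a closed-form ridge regressor to synthetic stand-ins for keystroke-timing features. The feature semantics, data, and weights are invented for the example and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for keystroke-timing features (e.g., hold-time and
# flight-time statistics) and a severity-like target; all values invented.
n_sessions, n_features = 40, 4
X = rng.normal(size=(n_sessions, n_features))
true_w = np.array([0.8, -0.3, 0.5, 0.1])
y = X @ true_w + 0.05 * rng.normal(size=n_sessions)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y)
pred = X @ w
print(np.corrcoef(pred, y)[0, 1] > 0.9)  # the fit recovers the trend
```

Ridge regularization is a reasonable default here because keystroke features are typically correlated and the clinical cohorts are small.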
"Motor Impairment Estimates via Touchscreen Typing Dynamics Toward Parkinson's Disease Detection From Data Harvested In-the-Wild." Frontiers in ICT, p. 28, published 2018-11-08. doi:10.3389/fict.2018.00028
Social media and the collection of large volumes of multimedia data such as images, videos, and accompanying text are of prime importance in today’s society, stimulated by the human drive to communicate with one another. A useful way to exploit such huge multimedia volumes is the 3D reconstruction and modelling of sites, historical cultural cities/regions, or objects of interest from short videos captured by ordinary users, mainly for personal or touristic purposes. The main challenge in this research is the unstructured nature of the videos and the fact that they contain much information unrelated to the object whose 3D model is sought, such as people standing in front of the objects, weather conditions, etc. In this article, we propose an automatic scheme for 3D modelling/reconstruction of objects of interest by collecting pools of short-duration videos that have been captured mainly for touristic purposes. Initially, a video summarization algorithm is introduced using a discriminant Principal Component Analysis (d-PCA). The goal of this scheme is to extract frames so that bunches within each video cluster (containing videos referring to the same object) present maximum coherency of image data, while content across bunches presents minimum coherency. Experimental results on cultural objects indicate the efficiency of the proposed method in 3D-reconstructing assets of interest from unstructured image content.
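A loose sketch of the keyframe-selection idea follows, using ordinary PCA via SVD rather than the paper's discriminant PCA, on synthetic frame features: representative frames are taken as those closest to the cluster centroid in the projected space. All data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a cluster of video frames: each row is a
# flattened per-frame feature vector (no real video data here).
frames = rng.normal(size=(20, 64))

def pca_project(X, k=2):
    """Project rows of X onto their top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def pick_keyframes(X, k=3):
    """Summary heuristic: keep the frames closest to the cluster
    centroid in PCA space, i.e., maximally coherent representatives."""
    Z = pca_project(X)
    dist = np.linalg.norm(Z - Z.mean(axis=0), axis=1)
    return np.argsort(dist)[:k]

print(pick_keyframes(frames))  # indices of three representative frames
```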
A. Doulamis. "Automatic 3D Reconstruction From Unstructured Videos Combining Video Summarization and Structure From Motion." Frontiers in ICT, p. 29, published 2018-11-06. doi:10.3389/fict.2018.00029
Haojie Wu, D. Ashmead, Haley Adams, Bobby Bodenheimer
This work investigates how pedestrian street crossing behavior at a virtual traffic roundabout is affected by central visual field loss. We exposed participants with normal vision to a first-person virtual experience of central visual field loss of variable size in the form of a simulated scotoma, an area of the visual field with degraded visual acuity. A larger size of scotoma influenced people to select longer gaps between traffic, and to wait longer before initiating a crossing. In addition, a gender difference was found for risk taking behavior. Male subjects tended to take more risk, as indicated by the selection of shorter gaps in traffic and a shorter delay before the initiation of a crossing. Our findings generally replicate those of studies done in real-world conditions using participants afflicted with genuine central vision loss, supporting the hypothesis that virtual reality is a safe and accessible alternative for investigating similar issues of public concern.
"Using Virtual Reality to Assess the Street Crossing Behavior of Pedestrians With Simulated Macular Degeneration at a Roundabout." Frontiers in ICT, p. 27, published 2018-10-16. doi:10.3389/fict.2018.00027
This paper investigates the appropriation of digital musical instruments, wherein the performer develops a personal working relationship with an instrument that may differ from the designer's intent. Two studies are presented which explore different facets of appropriation. First, a highly restrictive instrument was designed to assess the effects of constraint on unexpected creative use. Second, a digital instrument was created which initially shared several constraints and interaction modalities with the first instrument, but which could be rewired by the performer to discover sounds not directly anticipated by the designers. Each instrument was studied with 10 musicians working individually to prepare public performances on the instrument. The results suggest that constrained musical interactions can promote the discovery of unusual and idiosyncratic playing techniques, and that tighter constraints may paradoxically lead to a richer performer experience. The diversity of ways in which the rewirable instrument was modified and used indicates that its design is open to interpretation by the performer, who may discover interaction modalities that were not anticipated by the designers.
Victor Zappi, Andrew Mcpherson. "Hackable Instruments: Supporting Appropriation and Modification in Digital Musical Interaction." Frontiers in ICT, p. 26, published 2018-10-04. doi:10.3389/fict.2018.00026
This paper presents Force Push, a novel gesture-based interaction technique for remote object manipulation in virtual reality (VR). Inspired by the design of magic powers in popular culture, Force Push uses intuitive hand gestures to drive physics-based movement of the object. Using a novel algorithm that dynamically maps rich features of hand gestures to the properties of the physics simulation, both coarse-grained ballistic movements and fine-grained refinement movements can be achieved seamlessly and naturally. An initial user study of a limited translation task showed that, although its gesture-to-force mapping is inherently harder to control than traditional position-to-position mappings, Force Push is usable even for extremely difficult tasks. Direct position-to-position control outperformed Force Push when the initial distance between the object and the target was close relative to the required accuracy; however, the gesture-based method began to show promising results when they were far away from each other. As for subjective user experience, Force Push was perceived as more natural and fun to use, even though its controllability and accuracy were thought to be inferior to direct control. This paper expands the design space of object manipulation beyond mimicking reality, and provides hints on using magical gestures and physics-based techniques for higher usability and hedonic qualities in user experience.
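The gesture-to-force idea can be sketched as a nonlinear speed-to-force mapping driving a simple physics simulation. The mapping function, gains, and drag model below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def gesture_force(hand_speed, gain=2.0, power=1.5):
    """Nonlinear speed-to-force mapping: slow gestures give fine control,
    fast gestures give disproportionately large, ballistic forces."""
    return gain * np.sign(hand_speed) * abs(hand_speed) ** power

def simulate(speeds, mass=1.0, drag=0.8, dt=1.0 / 60.0):
    """Integrate a 1-D point mass driven by the mapped force, with drag."""
    pos, vel = 0.0, 0.0
    for s in speeds:
        acc = (gesture_force(s) - drag * vel) / mass
        vel += acc * dt
        pos += vel * dt
    return pos

far = simulate([3.0] * 60)   # fast push held for one second
near = simulate([0.3] * 60)  # gentle push held for one second
print(far > near)  # True: faster gestures move the object farther
```

A superlinear exponent is one way to get both regimes from a single mapping: small speeds are attenuated (refinement), large speeds amplified (ballistic transport).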
Run Yu, D. Bowman. "Force Push: Exploring Expressive Gesture-to-Force Mappings for Remote Object Manipulation in Virtual Reality." Frontiers in ICT, p. 25, published 2018-09-28. doi:10.3389/fict.2018.00025
Kyohei Tatsukawa, Hideyuki Takahashi, Y. Yoshikawa, H. Ishiguro
It is well known that artificial agents, such as robots, can potentially exert social influence and alter human behavior in certain situations. However, it is still unclear what specific psychological factors enhance the social influence of artificial agents on human decision-making. In this study, we performed an experiment to investigate what psychological factors predict the degree to which human partners versus artificial agents exert social influence on human beings. Participants were instructed to make a decision after a partner agent (human, computer, or android robot) answered the same question in a color perception task. Results indicated that the degree to which participants conformed to the partner agent positively correlated with their perceived interpersonal closeness toward the partner agent. Moreover, participants’ responses and accompanying error rates did not differ as a function of agent type. The current findings contribute to the design of artificial agents with high social influence.
"Interpersonal Closeness Correlates With Social Influence on Color Perception Task Using Human and Artificial Agents." Frontiers in ICT, p. 24, published 2018-09-26. doi:10.3389/fict.2018.00024
Background: Mainly due to an increase in stress-related health problems, and driven by recent technological advances in biosensors, microelectronics, computing platforms, and human-computer interaction, ubiquitous physiological information will potentially transform the role of biofeedback in clinical treatment. Such technology is also likely to provide a useful tool for stress management in everyday life. The aims of this systematic review are to: 1) classify biofeedback systems for stress management, with a special focus on biosensing techniques, bio-data computing approaches, biofeedback protocols, and feedback modalities; and 2) review ways of evaluating biofeedback applications in terms of their effectiveness in stress management. Method: A systematic literature search was conducted using keywords for “Biofeedback” and “Stress” within the following databases: PubMed, IEEE Xplore, ACM, and Scopus. Two independent reviewers were involved in selecting articles. Results: We identified 103 studies published between 1990 and 2016, 46 of which met our inclusion criteria and were further analyzed. Based on the evidence reviewed, HRV, multimodal biofeedback, RSP, HR, and GSR appear to be the most common techniques for alleviating stress. Traditional screen-based visual displays remain the most common devices used for biofeedback display. Biofeedback applications are usually assessed with both physiological and psychological measurements. Conclusions: This review reveals several challenges related to biofeedback for everyday stress management, such as facilitating the user’s perception and interpretation of the biofeedback information, the demand for ubiquitous biosensing and display technologies, and the need for field evaluation to understand the use of biofeedback in everyday environments. We expect that various emerging HCI technologies could be used to address these challenges. New interaction designs as well as biofeedback paradigms can be further explored to improve the accessibility, usability, comfort, engagement, and user experience of biofeedback in everyday use.
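Since HRV is among the most common biofeedback modalities identified in the review, here is a small example computing RMSSD, a standard time-domain HRV measure often driven by such systems. The RR intervals are toy values for illustration.

```python
import math

def rmssd(rr_ms):
    """RMSSD: root mean square of successive differences between RR
    intervals (ms), a standard time-domain HRV measure."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 780, 795]  # toy RR intervals in milliseconds
print(round(rmssd(rr), 1))  # → 19.7
```

In a biofeedback loop, a value like this would be computed over a sliding window of beats and mapped to the chosen feedback modality (visual, auditory, or haptic).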
Bin Yu, M. Funk, Jun Hu, Qi Wang, L. Feijs. "Biofeedback for Everyday Stress Management: A Systematic Review." Frontiers in ICT, p. 23, published 2018-09-07. doi:10.3389/fict.2018.00023
A. Thaler, Ivelina V. Piryankova, Jeanine K. Stefanucci, S. Pujades, S. Rosa, S. Streuber, J. Romero, Michael J. Black, B. Mohler
The creation or streaming of photo-realistic self-avatars is important for virtual reality applications that aim for perception and action to replicate real-world experience. The appearance and recognition of a digital self-avatar may be especially important for applications related to telepresence, embodied virtual reality, or immersive games. We investigated gender differences in the use of visual cues (shape, texture) of a self-avatar for estimating body weight and evaluating avatar appearance. A full-body scanner was used to capture each participant's body geometry and color information, and a set of 3D virtual avatars with realistic weight variations was created based on a statistical body model. Additionally, a second set of avatars was created with an average underlying body shape matched to each participant's height and weight. In four sets of psychophysical experiments, the influence of visual cues on the accuracy of body weight estimation and the sensitivity to weight changes was assessed by manipulating body shape (own, average) and texture (own photo-realistic, checkerboard). The avatars were presented on a large-screen display, and participants indicated whether the avatar's weight corresponded to their own weight. Participants also adjusted the avatar's weight to their desired weight and evaluated the avatar's appearance with regard to similarity to their own body, uncanniness, and their willingness to accept it as a digital representation of the self. The results of the psychophysical experiments revealed no gender difference in the accuracy of estimating body weight in avatars. However, males accepted a larger weight range of the avatars as corresponding to their own. In terms of the ideal body weight, females but not males desired a thinner body. 
With regard to the evaluation of avatar appearance, the questionnaire responses suggest that own photo-realistic texture was more important to males for higher similarity ratings, while own body shape seemed to be more important to females. These results argue for gender-specific considerations when creating self-avatars.
{"title":"Visual Perception and Evaluation of Photo-Realistic Self-Avatars From 3D Body Scans in Males and Females","authors":"A. Thaler, Ivelina V. Piryankova, Jeanine K. Stefanucci, S. Pujades, S. Rosa, S. Streuber, J. Romero, Michael J. Black, B. Mohler","doi":"10.3389/fict.2018.00018","DOIUrl":"https://doi.org/10.3389/fict.2018.00018","url":null,"abstract":"The creation or streaming of photo-realistic self-avatars is important for virtual reality applications that aim for perception and action to replicate real world experience. The appearance and recognition of a digital self-avatar may be especially important for applications related to telepresence, embodied virtual reality, or immersive games. We investigated gender differences in the use of visual cues (shape, texture) of a self-avatar for estimating body weight and evaluating avatar appearance. A full-body scanner was used to capture each participant's body geometry and color information and a set of 3D virtual avatars with realistic weight variations was created based on a statistical body model. Additionally, a second set of avatars was created with an average underlying body shape matched to each participant’s height and weight. In four sets of psychophysical experiments, the influence of visual cues on the accuracy of body weight estimation and the sensitivity to weight changes was assessed by manipulating body shape (own, average) and texture (own photo-realistic, checkerboard). The avatars were presented on a large-screen display, and participants responded to whether the avatar's weight corresponded to their own weight. Participants also adjusted the avatar's weight to their desired weight and evaluated the avatar's appearance with regard to similarity to their own body, uncanniness, and their willingness to accept it as a digital representation of the self. The results of the psychophysical experiments revealed no gender difference in the accuracy of estimating body weight in avatars. 
However, males accepted a larger weight range of the avatars as corresponding to their own. In terms of the ideal body weight, females but not males desired a thinner body. With regard to the evaluation of avatar appearance, the questionnaire responses suggest that own photo-realistic texture was more important to males for higher similarity ratings, while own body shape seemed to be more important to females. These results argue for gender-specific considerations when creating self-avatars.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"570 ","pages":"18"},"PeriodicalIF":0.0,"publicationDate":"2018-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3389/fict.2018.00018","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72494518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amir E. Sarabadani Tafreshi, Andrea Soro, G. Tröster
Most public and semi-public displays show content that is not related to people passing by. As a result, most passersby completely ignore the displays. One solution to this problem is to give viewers the means to interact explicitly with such displays to convey their interests and thus receive content relevant to them. However, which method of interaction is most appropriate for gathering information on viewers' interests is still an open question. To identify methods appropriate for indicating topics of interest to public displays, we identified a range of dimensions to be considered when setting up public displays. We report a single-user and a multi-user study that use these dimensions to measure the effects of automatic, gestural, voice, positional, and cross-device interest indication methods. Our results enable us to establish guidelines for practitioners and researchers for selecting the most suitable interest indication method for a given scenario. Our results showed that cross-device and automatic methods strongly retain users' privacy. Gestural and positional methods were reported to be a fun experience. However, the gestural method performed better in the single-user study than in the multi-user study in all dimensions.
{"title":"Automatic, Gestural, Voice, Positional, or Cross-Device Interaction? Comparing Interaction Methods to Indicate Topics of Interest to Public Displays","authors":"Amir E. Sarabadani Tafreshi, Andrea Soro, G. Tröster","doi":"10.3389/fict.2018.00020","DOIUrl":"https://doi.org/10.3389/fict.2018.00020","url":null,"abstract":"Most public and semi-public displays show content that is not related to people passing by. As a result, most passersby completely ignore the displays. One solution to this problem is to give viewers the means to interact explicitly with such displays to convey their interests and thus receive content relevant to them. However, which method of interaction is most appropriate for gathering information on viewers' interests is still an open question. To identify methods appropriate for indicating topics of interest to public displays, we identified a range of dimensions to be considered when setting up public displays. We report a single-user and a multi-user study that use these dimensions to measure the effects of automatic, gestural, voice, positional, and cross-device interest indication methods. Our results enable us to establish guidelines for practitioners and researchers for selecting the most suitable interest indication method for a given scenario. Our results showed that cross-device and automatic methods strongly retain users' privacy. Gestural and positional methods were reported to be a fun experience. 
However, the gestural method performed better in the single-user study than in the multi-user study in all dimensions.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"90 1","pages":"20"},"PeriodicalIF":0.0,"publicationDate":"2018-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84305017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}