Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as others have required, was investigated in Mitiche et al. (2015) using an integral functional with a term of conformity of scene flow and depth to the spatiotemporal variations of the image sequence, and L2 regularization terms for a smooth depth field and scene flow. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method, except that the unknowns were depth and scene flow rather than optical flow. Several examples were given to show the basic potency of the method: it was able to recover good depth and motion, except at their boundaries, because L2 regularization is blind to discontinuities and smooths them indiscriminately. The method we study in this paper generalizes the formulation of Mitiche et al. (2015) to L1 regularization so that it computes boundary-preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are also computed from the recorded image sequence by a variational method that uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler-Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are demonstrated in experiments with real and synthetic images that compare L1 versus L2 regularization of depth and motion, as well as L1 versus L2 regularization of the image derivatives.
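To make the contrast concrete, here is a schematic sketch (generic notation, not the authors' exact functional) of an L2-regularized versus an L1-regularized (total variation) objective for joint depth $Z$ and scene flow $\mathbf{V} = (V_1, V_2, V_3)$, where $\Psi$ stands for the conformity term tying the unknowns to the spatiotemporal variations of the image sequence $I$ over the image domain $\Omega$; all symbols are assumptions of this sketch:

\[
E_{L2}(Z,\mathbf{V}) = \int_{\Omega} \Psi(Z,\mathbf{V};I)^{2}\,d\mathbf{x} + \lambda \int_{\Omega} \Big( \|\nabla Z\|^{2} + \sum_{i=1}^{3} \|\nabla V_i\|^{2} \Big)\,d\mathbf{x},
\]
\[
E_{L1}(Z,\mathbf{V}) = \int_{\Omega} \Psi(Z,\mathbf{V};I)^{2}\,d\mathbf{x} + \lambda \int_{\Omega} \Big( \|\nabla Z\| + \sum_{i=1}^{3} \|\nabla V_i\| \Big)\,d\mathbf{x}.
\]

The Euler-Lagrange equations of $E_{L1}$ contain curvature-like terms of the form $\operatorname{div}(\nabla Z / \|\nabla Z\|)$, which are nonlinear but penalize gradients only linearly, so sharp transitions in depth and motion are no longer smoothed away as they are under the quadratic penalty of $E_{L2}$.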
{"title":"Monocular, Boundary-Preserving Joint Recovery of Scene Flow and Depth","authors":"Y. Mathlouthi, A. Mitiche, Ismail Ben Ayed","doi":"10.3389/fict.2016.00021","DOIUrl":"https://doi.org/10.3389/fict.2016.00021","url":null,"abstract":"Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as others required, was investigated in Mitiche et al. (2015) using an integral functional with a term of conformity of scene flow and depth to the image sequence spatiotemporal variations, and L2 regularization terms for smooth depth field and scene flow. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method except that the unknowns were depth and scene flow rather than optical flow. Several examples were given to show the basic potency of the method: It was able to recover good depth and motion, except at their boundaries because L2 regularization is blind to discontinuities which it smooths indiscriminately. The method we study in this paper generalizes to L1 regularization the formulation of Mitiche et al. (2015) so that it computes boundary preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are computed from the recorded image sequence also by a variational method which uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler-Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are put in evidence in experimentation with real and synthetic images which shows the results of L1 versus L2 regularization of depth and motion, as well as the results using L1 rather than L2 regularization of image derivatives.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"68 1","pages":"21"},"PeriodicalIF":0.0,"publicationDate":"2016-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85386813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fernando Garcia-Sanjuan, J. Martínez, Vicente Nacher
Combining multiple displays in the same environment enables more immersive and rich experiences in which visualization and interaction can be improved. Although much research has been done in the field of Multi-Display Environments (MDEs) and previous studies have provided taxonomies to define them, these have usually consisted of partial descriptions. In this paper we propose a general taxonomy that identifies the key dimensions to tackle when developing MDEs and a classification of previous studies, with the aim of helping designers to identify the key aspects that must be addressed when developing the next generation of MDEs.
{"title":"Toward a General Conceptualization of Multi-Display Environments","authors":"Fernando Garcia-Sanjuan, J. Martínez, Vicente Nacher","doi":"10.3389/fict.2016.00020","DOIUrl":"https://doi.org/10.3389/fict.2016.00020","url":null,"abstract":"Combining multiple displays in the same environment enables more immersive and rich experiences in which visualization and interaction can be improved. Although much research has been done in the field of Multi-Display Environments (MDEs) and previous studies have provided taxonomies to define them, these have usually consisted of partial descriptions. In this paper we propose a general taxonomy that identifies the key dimensions to tackle when developing MDEs and a classification of previous studies, with the aim of helping designers to identify the key aspects that must be addressed when developing the next generation of MDEs.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"5 1","pages":"20"},"PeriodicalIF":0.0,"publicationDate":"2016-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76076659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-stoquastic Hamiltonians have both positive and negative signs in the off-diagonal elements of their matrix representation in the standard computational basis and thus cannot be simulated efficiently by the standard quantum Monte Carlo method due to the sign problem. We describe our analytical studies of this type of Hamiltonian, with infinite-range non-random as well as random interactions, from the perspective of a possible enhancement of the efficiency of quantum annealing or adiabatic quantum computing. It is shown that multi-body transverse interactions like $XX$ and $XXXXX$ with positive coefficients, appended to a stoquastic transverse-field Ising model, render the Hamiltonian non-stoquastic and reduce a first-order quantum phase transition in the simple transverse-field case to a second-order transition. This implies that the efficiency of quantum annealing is exponentially enhanced, because a first-order transition has an exponentially small energy gap (and therefore an exponentially long computation time), whereas a second-order transition has a polynomially decaying gap (polynomial computation time). The examples presented here represent rare instances where strong quantum effects, in the sense that they cannot be simulated efficiently by the standard quantum Monte Carlo method, have been shown analytically to exponentially enhance the efficiency of quantum annealing for combinatorial optimization problems.
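To illustrate the construction, a schematic infinite-range Hamiltonian of the kind described above (not the paper's exact model; $p$, $k$, $\Gamma$, $\gamma$, and the mean-field form are assumptions of this sketch) can be written as

\[
H = -N\Big(\frac{1}{N}\sum_{i=1}^{N}\sigma_i^{z}\Big)^{p} - \Gamma\sum_{i=1}^{N}\sigma_i^{x} + \gamma\, N\Big(\frac{1}{N}\sum_{i=1}^{N}\sigma_i^{x}\Big)^{k}, \qquad \Gamma, \gamma > 0,\; k \ge 2.
\]

The first two terms form a stoquastic transverse-field Ising model: in the computational ($\sigma^z$) basis, the $-\Gamma\sum_i \sigma_i^x$ term contributes only non-positive off-diagonal elements. The last term, once expanded, contains terms proportional to $+\sigma_i^x\sigma_j^x$ (and higher-order products) with positive coefficients; their multi-spin-flip matrix elements are positive and cannot be canceled by the single-spin-flip transverse field, which is what renders the full Hamiltonian non-stoquastic.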
{"title":"Exponential Enhancement of the Efficiency of Quantum Annealing by Non-Stoquastic Hamiltonians","authors":"H. Nishimori, K. Takada","doi":"10.3389/fict.2017.00002","DOIUrl":"https://doi.org/10.3389/fict.2017.00002","url":null,"abstract":"Non-stoquastic Hamiltonians have both positive and negative signs in off-diagonal elements in their matrix representation in the standard computational basis and thus cannot be simulated efficiently by the standard quantum Monte Carlo method due to the sign problem. We describe our analytical studies of this type of Hamiltonians with infinite-range non-random as well as random interactions from the perspective of possible enhancement of the efficiency of quantum annealing or adiabatic quantum computing. It is shown that multi-body transverse interactions like $XX$ and $XXXXX$ with positive coefficients appended to a stoquastic transverse-field Ising model render the Hamiltonian non-stoquastic and reduce a first-order quantum phase transition in the simple transverse-field case to a second-order transition. This implies that the efficiency of quantum annealing is exponentially enhanced, because a first-order transition has an exponentially small energy gap (and therefore exponentially long computation time) whereas a second-order transition has a polynomially decaying gap (polynomial computation time). The examples presented here represent rare instances where strong quantum effects, in the sense that they cannot be efficiently simulated in the standard quantum Monte Carlo, have analytically been shown to exponentially enhance the efficiency of quantum annealing for combinatorial optimization problems.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"333 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2016-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76299243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filomena Scibelli, A. Troncone, Laurence Likforman-Sulem, A. Vinciarelli, A. Esposito
Most studies investigating the processing of emotions in depressed patients reported impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions) that do not reflect the way people experience emotions in everyday life. For this reason, this work investigates the decoding of emotional expressions in patients affected by Recurrent Major Depressive Disorder (RMDDs) using dynamic audio/video stimuli. RMDDs’ performance is compared with the performance of patients with Adjustment Disorder with Depressed Mood (ADs) and healthy control subjects (HCs). The experiments involve 27 RMDDs (16 with acute depression - RMDD-A, and 11 in a compensation phase - RMDD-C), 16 ADs, and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio (without video), video (without audio), and audio/video clips. The results show that AD patients are significantly less accurate than HCs in decoding fear, anger, happiness, surprise, and sadness. RMDD-As are significantly less accurate than HCs in decoding happiness, sadness, and surprise. Finally, no significant differences were found between HCs and RMDD-Cs. The communication channel and the type of emotion play a significant role in limiting decoding accuracy.
{"title":"How Major Depressive Disorder Affects the Ability to Decode Multimodal Dynamic Emotional Stimuli","authors":"Filomena Scibelli, A. Troncone, Laurence Likforman-Sulem, A. Vinciarelli, A. Esposito","doi":"10.3389/fict.2016.00016","DOIUrl":"https://doi.org/10.3389/fict.2016.00016","url":null,"abstract":"Most studies investigating the processing of emotions in depressed patients reported impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions) which do not reflect the way people experience emotions in everyday life. For this reason, this work proposes to investigate the decoding of emotional expressions in patients affected by Recurrent Major Depressive Disorder (RMDDs) using dynamic audio/video stimuli. RMDDs’ performance is compared with the performance of patients with Adjustment Disorder with Depressed Mood (ADs) and healthy (HCs) subjects. The experiments involve 27 RMDDs (16 with acute depression - RMDD-A, and 11 in a compensation phase - RMDD-C), 16 ADs and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio (without video), video (without audio) and audio/video clips. The results show that AD patients are significantly less accurate than HCs in decoding fear, anger, happiness, surprise and sadness. RMDD-As with acute depression are significantly less accurate than HCs in decoding happiness, sadness and surprise. Finally, no significant differences were found between HCs and RMDD-Cs in a compensation phase. The different communication channels and the types of emotion play a significant role in limiting the decoding accuracy.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"37 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87719673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Obsessive-compulsive disorder (OCD) is characterized by the presence of unwanted and repetitive thoughts triggering significant anxiety, as well as the presence of ritual behaviours or mental acts carried out in response to obsessions to reduce the associated distress. In the contamination subtype, individuals are scared of germs and bacteria, are excessively concerned with cleaning, fear contamination and the spread of disease, and may have a very strong aversion to bodily secretions. A few studies on virtual reality have been conducted with people suffering from OCD, but they all focus on the subtype characterized by checking rituals. The goal of this study is to confirm the potential of a “contaminated” virtual environment in inducing anxiety in 12 adults suffering from contamination-subtype OCD compared to 20 adults without OCD (N = 32) using a within-between protocol. Subjective (questionnaire) and objective (heart rate) measurements were compiled. Participants were immersed in a control virtual environment (empty and clean room) and a “contaminated” virtual environment (filthy public restroom) designed for the treatment of OCD. Immersions were conducted in a 6-wall CAVE-like system. As hypothesized, the results of repeated-measures ANCOVAs revealed the significant impact of immersion in a filthy public restroom for participants suffering from OCD on both measures. Presence was correlated with anxiety in OCD participants and no difference in presence was observed between groups. Unwanted negative side effects induced by immersions in virtual reality were higher in the OCD group. The clinical implications of the results and directions for further studies are discussed.
{"title":"Inducing an Anxiety Response Using a Contaminated Virtual Environment: Validation of a Therapeutic Tool for Obsessive–Compulsive Disorder","authors":"M. Laforest, S. Bouchard, A. Crétu, Olivier Mesly","doi":"10.3389/fict.2016.00018","DOIUrl":"https://doi.org/10.3389/fict.2016.00018","url":null,"abstract":"Obsessive-compulsive disorder (OCD) is characterized by the presence of unwanted and repetitive thoughts triggering significant anxiety, as well as the presence of ritual behaviours or mental acts carried out in response to obsessions to reduce the associated distress. In the contamination subtype, individuals are scared of germs and bacteria, are excessively concerned with cleaning, fear contamination and the spread of disease, and may have a very strong aversion to bodily secretions. A few studies on virtual reality have been conducted with people suffering from OCD, but they all focus on the subtype characterized by checking rituals. The goal of this study is to confirm the potential of a “contaminated” virtual environment in inducing anxiety in 12 adults suffering from contamination-subtype OCD compared to 20 adults without OCD (N = 32) using a within-between protocol. Subjective (questionnaire) and objective (heart rate) measurements were compiled. Participants were immersed in a control virtual environment (empty and clean room) and a “contaminated” virtual environment (filthy public restroom) designed for the treatment of OCD. Immersions were conducted in a 6-wall CAVE-like system. As hypothesized, the results of repeated-measures ANCOVAs revealed the significant impact of immersion in a filthy public restroom for participants suffering from OCD on both measures. Presence was correlated with anxiety in OCD participants and no difference in presence was observed between groups. Unwanted negative side effects induced by immersions in virtual reality were higher in the OCD group. The clinical implications of the results and directions for further studies are discussed.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"32 1","pages":"18"},"PeriodicalIF":0.0,"publicationDate":"2016-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90186725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing number of datasets encountered in imaging studies, the increasing complexity of processing workflows, and a growing awareness of data stewardship, there is a need for managed, automated workflows. In this paper, we introduce Fastr, an automated workflow engine with support for advanced data flows. Fastr has built-in data provenance for recording processing trails and ensuring reproducible results. The extensible plugin-based design allows the system to interface with virtually any image archive and processing infrastructure. This workflow engine is designed to consolidate quantitative imaging biomarker pipelines in order to enable easy application to new data.
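The following is a hypothetical, minimal Python sketch of the two ideas the abstract emphasizes, a node-based workflow graph and a recorded provenance trail. It is not Fastr's actual API; the class and function names (Node, Workflow, _digest) and the toy pipeline are invented purely for illustration.

# Hypothetical sketch of the concepts described above (a workflow graph with
# a recorded provenance trail); this is NOT Fastr's actual API.
import hashlib
import json
from datetime import datetime, timezone


class Node:
    """A processing step: a named function plus the names of its inputs."""
    def __init__(self, name, func, inputs):
        self.name, self.func, self.inputs = name, func, inputs


class Workflow:
    """Runs nodes in the order given and records a provenance trail."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.provenance = []

    def run(self, data):
        values = dict(data)
        for node in self.nodes:
            args = [values[k] for k in node.inputs]
            result = node.func(*args)
            values[node.name] = result
            # Record what was produced, from which inputs, and when.
            self.provenance.append({
                "node": node.name,
                "inputs": {k: _digest(values[k]) for k in node.inputs},
                "output": _digest(result),
                "timestamp": datetime.now(timezone.utc).isoformat(),
            })
        return values, self.provenance


def _digest(value):
    """Content hash used to make the provenance trail verifiable."""
    return hashlib.sha256(json.dumps(value, sort_keys=True, default=str).encode()).hexdigest()[:12]


if __name__ == "__main__":
    # Toy "pipeline": smooth an intensity list, then compute a scalar biomarker.
    smooth = Node("smoothed",
                  lambda xs: [sum(xs[max(0, i - 1):i + 2]) / len(xs[max(0, i - 1):i + 2])
                              for i in range(len(xs))],
                  ["image"])
    biomarker = Node("mean_intensity", lambda xs: sum(xs) / len(xs), ["smoothed"])
    outputs, trail = Workflow([smooth, biomarker]).run({"image": [1.0, 4.0, 2.0, 8.0]})
    print(outputs["mean_intensity"])
    print(json.dumps(trail, indent=2))

Recording a content digest per input and output, rather than only file names, is one simple way such a trail can support reproducibility checks when the same pipeline is re-applied to new data.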
{"title":"Fastr: A Workflow Engine for Advanced Data Flows in Medical Image Analysis","authors":"H. Achterberg, Marcel Koek, W. Niessen","doi":"10.3389/fict.2016.00015","DOIUrl":"https://doi.org/10.3389/fict.2016.00015","url":null,"abstract":"With the increasing number of datasets encountered in imaging studies, the increasing complexity of processing workflows, and a growing awareness for data stewardship, there is a need for managed, automated workflows. In this paper we introduce Fastr, an automated workflow engine with support for advanced data flows. Fastr has built-in data provenance for recording processing trails and ensuring reproducible results. The extensible plugin-based design allows the system to interface with virtually any image archive and processing infrastructure. This workflow engine is designed to consolidate quantitative imaging biomarker pipelines in order to enable easy application to new data.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"56 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2016-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75357797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrew C. Robb, A. Kleinsmith, Andrew Cordar, C. White, A. Wendling, S. Lampotang, Benjamin C. Lok
Despite research showing that team training can lead to strong improvements in team performance, logistical difficulties can prevent team training programs from being adopted on a large scale. A proposed solution to these difficulties is the use of virtual humans to replace missing teammates. Existing research evaluating the use of virtual humans for team training has been conducted in settings involving a single human trainee. However, in the real world, multiple human trainees would most likely train together. In this paper, we explore how the presence of a second human trainee can alter behavior during a medical team training program. Ninety-two nurses and surgical technicians participated in a medical training exercise, where they worked with a virtual surgeon and a virtual anesthesiologist to prepare a simulated patient for surgery. The agency of the nurse and the surgical technician was varied across three conditions: human nurses and surgical technicians working together; human nurses working with a virtual surgical technician; and human surgical technicians working with a virtual nurse. Variations in agency did not produce statistically significant differences in the training outcomes, but several notable differences were observed in other aspects of the team's behavior. Specifically, when working with a virtual nurse, human surgical technicians were more likely to speak up about patient safety issues that were outside of their normal responsibilities; human trainees spent less time searching for a missing item when working with a virtual partner, likely because the virtual partner was physically unable to move throughout the room and assist with the search; and more breaks in presence were observed when two human teammates were present. These results show that some behaviors may be influenced by the presence of multiple human trainees, though these behaviors may not impinge on core training goals. When developing virtual human-based training programs, designers should consider that the presence of other humans may reduce involvement during training moments perceived to be the responsibility of other trainees, and that a virtual teammate's limitations may cause human teammates to limit their own behaviors in corresponding ways (e.g., searching less).
{"title":"Training Together: How Another Human Trainee’s Presence Affects Behavior during Virtual Human-Based Team Training","authors":"Andrew C. Robb, A. Kleinsmith, Andrew Cordar, C. White, A. Wendling, S. Lampotang, Benjamin C. Lok","doi":"10.3389/fict.2016.00017","DOIUrl":"https://doi.org/10.3389/fict.2016.00017","url":null,"abstract":"Despite research showing that team training can lead to strong improvements in team performance, logistical difficulties can prevent team training programs from being adopted on a large scale. A proposed solution to these difficulties is the use of virtual humans to replace missing teammates. Existing research evaluating the use of virtual humans for team training has been conducted in settings involving a single human trainee. However, in the real world multiple human trainees would most likely train together. In this paper, we explore how the presence of a second human trainee can alter behavior during a medical team training program. Ninety-two nurses and surgical technicians participated in a medical training exercise, where they worked with a virtual surgeon and virtual anesthesiologist to prepare a simulated patient for surgery. The agency of the nurse and the surgical technician were varied between three conditions: human nurses and surgical technicians working together; human nurses working with a virtual surgical technician; and human surgical technicians working with a virtual nurse. Variations in agency did not produce statistically significant differences in the training outcomes, but several notable differences were observed in other aspects of the team's behavior. Specifically, when working with a virtual nurse, human surgical technicians were more likely to assist with speaking up about patient safety issues that were outside of their normal responsibilities; human trainees spent less time searching for a missing item when working with a virtual partner, likely because the virtual partner was physically unable to move throughout the room and assist with the searching process; and more breaks in presence were observed when two human teammates were present. These results show that some behaviors may be influenced by the presence of multiple human trainees, though these behaviors may not impinge on core training goals. When developing virtual human-based training programs, designers should consider that the presence of other humans may reduce involvement during training moments perceived to be the responsibility of other trainees, and should consider that a virtual teammate's limitations may cause human teammates to limit their own behaviors in corresponding ways (e.g. searching less).","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"1 1","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88301513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents design guidelines and recommendations for developing cursor manipulation interaction devices to be employed in a wearable context. The work presented in this paper is the culmination of three usability studies designed to understand commercially available pointing (cursor manipulation) devices suitable for use in a wearable context. The guidelines and recommendations presented are grounded in experimental and qualitative evidence derived from these three usability studies and are intended to inform the design of future wearable input devices. In addition to guiding the design process, the guidelines and recommendations may also be used to inform users of wearable computing devices by guiding them toward the selection of a suitable wearable input device. The synthesis of results derived from the series of usability studies provides insights pertaining to the choice and usability of the devices in a wearable context. That is, the guidelines form a checklist that may be utilized as a point of comparison when choosing between the different input devices available for wearable interaction.
{"title":"Design Guidelines for Wearable Pointing Devices","authors":"J. Zucco, B. Thomas","doi":"10.3389/fict.2016.00013","DOIUrl":"https://doi.org/10.3389/fict.2016.00013","url":null,"abstract":"This paper presents design guidelines and recommendations for developing cursor manipulation interaction devices to be employed in a wearable context. The work presented in this paper is the culmination three usability studies designed to understand commercially available pointing (cursor manipulation) devices suitable for use in a wearable context. The set of guidelines and recommendations presented are grounded on experimental and qualitative evidence derived from three usability studies and are intended to be used in order to inform the design of future wearable input devices. In addition to guiding the design process, the guidelines and recommendations may also be used to inform users of wearable computing devices by guiding towards the selection of a suitable wearable input device. The synthesis of results derived from a series of usability studies provide insights pertaining to the choice and usability of the devices in a wearable context. That is, the guidelines form a checklist that may be utilized as a point of comparison when choosing between the different input devices available for wearable interaction.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"49 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2016-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74593140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Our purpose is to focus attention on a new criterion for quantum schemes by bringing together the notions of quantum game and game isomorphism. A quantum game scheme is required to generate the classical game as a special case. Now, given a quantum game scheme and two isomorphic classical games, we additionally require the resulting quantum games to be isomorphic as well. We show how this isomorphism condition influences the players’ strategy sets. We are concerned with the Marinatto-Weber type quantum game scheme and the strong isomorphism between games in strategic form.
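For reference, one standard formulation of an isomorphism between strategic-form games (the paper's exact "strong isomorphism" may differ in detail; the notation here is an assumption of this sketch) is the following. Given $G = (N, \{S_i\}_{i\in N}, \{u_i\}_{i\in N})$ and $G' = (N, \{S'_i\}_{i\in N}, \{u'_i\}_{i\in N})$, an isomorphism from $G$ to $G'$ is a bijection $\eta: N \to N$ together with bijections $\varphi_i: S_i \to S'_{\eta(i)}$ such that, writing $\varphi(s)$ for the profile in $G'$ whose $\eta(i)$-th component is $\varphi_i(s_i)$,

\[
u'_{\eta(i)}\big(\varphi(s)\big) = u_i(s) \quad \text{for every player } i \in N \text{ and every profile } s \in S_1 \times \cdots \times S_n,
\]

i.e., payoffs are preserved exactly rather than only up to positive affine transformations. The condition discussed above then asks that a quantization scheme applied to isomorphic classical games $G \cong G'$ yield quantum games that are likewise isomorphic.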
{"title":"Strong Isomorphism in Marinatto–Weber Type Quantum Games","authors":"Piotr Frąckiewicz","doi":"10.3389/fict.2016.00012","DOIUrl":"https://doi.org/10.3389/fict.2016.00012","url":null,"abstract":"Our purpose is to focus attention on a new criterion for quantum schemes by bringing together the notions of quantum game and game isomorphism. A quantum game scheme is required to generate the classical game as a special case. Now, given a quantum game scheme and two isomorphic classical games, we additionally require the resulting quantum games to be isomorphic as well. We show how this isomorphism condition influences the players’ strategy sets. We are concerned with the Marinatto-Weber type quantum game scheme and the strong isomorphism between games in strategic form.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"126 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2016-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77607916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The nonverbal communication of clinicians has an impact on patients’ satisfaction and health outcomes. Yet medical students are not receiving enough training on the appropriate nonverbal behaviors in clinical consultations. Computer vision techniques have been used for detecting different kinds of nonverbal behaviors, and they can be incorporated in educational systems that help medical students develop communication skills. We describe EQClinic, a system that combines a tele-health platform with automated nonverbal behavior recognition. The system aims to help medical students improve their communication skills through a combination of human and automatically generated feedback. EQClinic provides fully automated calendaring and video-conferencing features for doctors or medical students to interview patients. We describe a pilot (18 dyadic interactions) in which standardized patients (i.e., people acting as real patients) were interviewed by medical students and provided assessments and comments about their performance. After the interviews, computer vision and audio processing algorithms were used to recognize students’ nonverbal behaviors known to influence the quality of a medical consultation, including turn taking, speaking ratio, sound volume, sound pitch, smiling, frowning, head leaning, head tilting, nodding, shaking, face-touch gestures, and overall body movements. The results showed that students’ awareness of nonverbal communication was enhanced by the feedback, which was provided both by the standardized patients and by the automated analysis.
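As a concrete illustration of the kind of audio-derived cues listed above, the following is a minimal Python sketch (not EQClinic's actual pipeline) that estimates per-speaker speaking time, speaking ratio, and turn count from two per-frame voice-activity streams; the frame length, function name, and input format are assumptions of this sketch.

# Hypothetical sketch (not EQClinic's pipeline): derive speaking ratio and
# turn-taking counts from per-frame voice activity of the two interlocutors.
from typing import List, Dict


def speaking_stats(student_vad: List[bool], patient_vad: List[bool],
                   frame_seconds: float = 0.03) -> Dict[str, float]:
    """student_vad / patient_vad: per-frame voice-activity flags (True = speech)."""
    assert len(student_vad) == len(patient_vad)
    student_time = sum(student_vad) * frame_seconds
    patient_time = sum(patient_vad) * frame_seconds
    total_speech = (student_time + patient_time) or 1e-9  # avoid division by zero

    # Count turns: a new turn starts whenever the (sole) active speaker changes.
    turns = 0
    last_speaker = None
    for s, p in zip(student_vad, patient_vad):
        speaker = "student" if s and not p else "patient" if p and not s else None
        if speaker is not None and speaker != last_speaker:
            turns += 1
            last_speaker = speaker
    return {
        "student_speaking_time_s": student_time,
        "patient_speaking_time_s": patient_time,
        "student_speaking_ratio": student_time / total_speech,
        "turns": turns,
    }


if __name__ == "__main__":
    # Toy example: two alternating utterances followed by a short reply.
    student = [True] * 20 + [False] * 30 + [True] * 10
    patient = [False] * 20 + [True] * 30 + [False] * 10
    print(speaking_stats(student, patient))

Cues such as pitch, volume, smiling, or head movement would require additional signal- and vision-processing stages, but the same per-interview summary structure can hold them.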
{"title":"Improving Medical Students’ Awareness of Their Non-Verbal Communication through Automated Non-Verbal Behavior Feedback","authors":"Chunfeng Liu, R. Calvo, Renee L Lim","doi":"10.3389/fict.2016.00011","DOIUrl":"https://doi.org/10.3389/fict.2016.00011","url":null,"abstract":"The nonverbal communication of clinicians has an impact on patients’ satisfaction and health outcomes. Yet medical students are not receiving enough training on the appropriate nonverbal behaviors in clinical consultations. Computer vision techniques have been used for detecting different kinds of nonverbal behaviors, and they can be incorporated in educational systems that help medical students develop communication skills. We describe EQClinic, a system that combines a tele-health platform with automated nonverbal behavior recognition. The system aims to help medical students improve their communication skills through a combination of human and automatically generated feedback. EQClinic provides fully automated calendaring and video-conferencing features for doctors or medical students to interview patients. We describe a pilot (18 dyadic interactions) in which standardized patients (i.e. someone acting as a real patient), were interviewed by medical students and provided assessments and comments about their performance. After the interview, computer vision and audio processing algorithms were used to recognize students’ nonverbal behaviors known to influence the quality of a medical consultation: including turn taking, speaking ratio, sound volume, sound pitch, smiling, frowning, head leaning, head tilting, nodding, shaking, face-touch gestures and overall body movements. The results showed that students’ awareness of nonverbal communication was enhanced by the feedback information, which was both provided by the standardized patients and generated by the machines.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"2 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2016-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73011604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}