Augmented Cognition via Brainwave Entrainment in Virtual Reality: An Open, Integrated Brain Augmentation in a Neuroscience System Approach
Emanuele Argento, George Papagiannakis, Eva Baka, Michail Maniadakis, Panos Trahanias, Michael Sfakianakis, Ioannis Nestoros
Pub Date: 2017-02-28 | DOI: 10.1007/s41133-017-0005-3 | Augmented Human Research 2(1)
Building on augmented cognition theory and technology, the novel contribution of this work is the acceleration and enhancement of certain brain functions related to task performance. We integrated the latest immersive virtual reality (VR) head-mounted displays with the Emotiv EPOC EEG headset in an open-source neuro- and biofeedback framework for cognitive state detection and augmentation. Our methodology significantly accelerates content presentation in immersive VR while lowering brain activity to the alpha band, without loss of content retention by the user. In pilot experiments, we tested our VR platform by presenting N = 25 subjects with a complex 3D maze and different procedures for learning how to exit it. Subjects exposed to our VR-induced entrainment learning technology performed significantly better than those exposed to other “classical” learning procedures. In particular, cognitive task performance augmentation was measured for learning time, complex navigational skills, decision-making abilities, and orientation ability.
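The alpha-band state that the entrainment system targets can be estimated offline from raw EEG. A minimal sketch, assuming samples at the Emotiv EPOC's 128 Hz rate and using Welch's method from SciPy; the band edges and the relative-power criterion are illustrative, not the authors' exact detection rule:

```python
import numpy as np
from scipy.signal import welch

def alpha_ratio(eeg, fs=128.0):
    """Fraction of 1-40 Hz spectral power falling in the 8-12 Hz alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    total = psd[(freqs >= 1) & (freqs <= 40)].sum()
    return alpha / total

# Synthetic check: a 10 Hz sinusoid with mild noise should score high.
t = np.arange(0, 10, 1 / 128.0)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
noise = np.random.randn(t.size)
print(alpha_ratio(signal))  # close to 1 for a dominant alpha rhythm
```

A neurofeedback loop would compute this ratio over a sliding window and adapt the VR content rate when the ratio crosses a calibrated threshold.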
Measuring Affective Well-Being by the Combination of the Day Reconstruction Method and a Wearable Device: Case Study of an Aging and Depopulating Community in Japan
Junichirou Ishio, Naoya Abe
Pub Date: 2017-02-23 | DOI: 10.1007/s41133-017-0006-2 | Augmented Human Research 2(1)
Affective well-being reflects changes in the predominance of people's positive or negative affects in response to their daily experiences. To measure it, people are usually asked to report their affective states for several episodes they experienced during a day. However, such conventional methods are problematic in terms of the burden they place on participants and the validity of the ratings. To overcome these problems, we introduce a new approach for measuring affective states that combines the day reconstruction method with the measurement of physiological stress levels by a wristband-type wearable device. As the indicator of physiological stress, we used heart rate variability calculated from the data recorded by the device. We examined the interpretability of the physiological stress level as a substitute for affective states by applying this combined approach to an aging and depopulating village in Japan, because the well-being of residents in such areas is a matter of public concern. As a result, we could identify the sources of affective well-being and the physiological stressors in the village. We also found a reasonable, but weak, correlation between the scores of affective states and the indicator of physiological stress levels. We discuss the challenges that must be overcome before the physiological stress level can be used as a substitute for affective state.
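The stress indicator above is derived from heart rate variability. One standard time-domain HRV measure is RMSSD, computed from inter-beat (RR) intervals; the abstract does not name the exact metric the authors used, so treat this as an illustrative computation:

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms).
    Higher RMSSD generally reflects greater parasympathetic activity,
    i.e., lower physiological stress."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Perfectly regular beats have zero variability.
print(rmssd([800, 800, 800, 800]))  # 0.0
# Beat-to-beat variation yields a positive score.
print(rmssd([800, 820, 790, 810]))
```

In a day-reconstruction study, RMSSD would be computed per reported episode and then compared against the self-rated affect for that episode.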
ARCTiC LawE: An Upper-Body Exoskeleton for Firearm Training
Thomas M. Schnieders, Richard T. Stone, Tyler Oviatt, Erik Danford-Klein
Pub Date: 2017-01-09 | DOI: 10.1007/s41133-017-0004-4 | Augmented Human Research 2(1)
The Armed Robotic Control for Training in Civilian Law Enforcement, or ARCTiC LawE, is an upper-body exoskeleton designed to assist civilian, military, and law enforcement personnel in accurate, precise, and reliable handgun technique. The training uses a laser-based handgun with dimensions, trigger pull, and break action similar to a Glock® 19 pistol, common to both public and private security sectors. The paper trains and tests subjects with no handgun training or experience, both with and without the ARCTiC LawE, and compares the results for accuracy, precision, and speed. Ultimately, the exoskeleton strongly affects sensorimotor learning, and the biomechanical implications are confirmed by both performance and physiological measurements. The researchers believe the ARCTiC LawE is a viable substitute for live-fire handgun training that can reduce training time and munition costs, and that it will increase accuracy and precision in typical law enforcement and military live-fire drills. Additionally, this paper broadens the body of knowledge on exoskeletons as training tools.
Comparing the Effect of Audio and Visual Notifications on Workspace Awareness Using Head-Mounted Displays for Remote Collaboration in Augmented Reality
Marina Cidota, Stephan Lukosch, Dragos Datcu, Heide Lukosch
Pub Date: 2016-10-10 | DOI: 10.1007/s41133-016-0003-x | Augmented Human Research 1(1)
In many fields of activity, working in teams is necessary to complete tasks properly and often requires visual, context-related information to be exchanged between team members. In such a collaborative environment, awareness of other people's activity is an important feature of shared-workspace collaboration. We have developed an augmented reality framework for virtual colocation that supports visual communication between two people in different physical locations: the remote user, who uses a laptop, and the local user, who wears a head-mounted display with an RGB camera. The remote user can assist the local user in solving a spatial problem by providing instructions in the form of virtual objects in the local user's view. For annotating the shared workspace, we use a state-of-the-art markerless localization and mapping algorithm that provides “anchors” in 3D space for placing virtual content. In this paper, we report on a user study that explores how automatic audio and visual notifications about the remote user's activities affect the local user's workspace awareness. We used an existing game to study virtual colocation, posing a spatial challenge at increasing levels of task complexity. The results show that participants clearly preferred visual notifications over audio or no notifications, regardless of task difficulty.
Gender-Impression Modification Enhances the Effect of Mediated Social Touch Between Persons of the Same Gender
Keita Suzuki, Masanori Yokoyama, Yuki Kionshita, Takayoshi Mochizuki, Tomohiro Yamada, Sho Sakurai, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose
Pub Date: 2016-10-10 | DOI: 10.1007/s41133-016-0002-y | Augmented Human Research 1(1)
The main contribution of this study is the realization of a method that enhances the effect of touch in remote communication between persons of the same gender by changing the gender impression with a voice changer during telecommunication. Although psychological studies have revealed that touch has various positive effects, such as triggering altruistic behavior and persuading others, these effects are restrained in some cases, especially in same-gender communication, because a touch between persons of the same gender tends to cause unpleasant feelings. However, “Transcendent Telepresence,” which enhances positive psychological effects and suppresses negative ones by modifying the information transmitted via telecommunication, enables us to overcome this problem. We hypothesized that telepresence that modifies people's gender impression reduces this unpleasantness and enhances the effect of touch. We tested the effectiveness of this method in a situation in which a male operator asked male participants to perform a monotonous task. The results showed that a touch by a male operator whose voice was changed to sound female could reduce the boredom of the task and improve friendliness toward the operator. We believe this method enables effective communication in various fields, including telemedicine, crowdsourcing, and remote education.
A Communication Paradigm Using Subvocalized Speech: Translating Brain Signals into Speech
Kusuma Mohanchandra, Snehanshu Saha
Pub Date: 2016-10-10 | DOI: 10.1007/s41133-016-0001-z | Augmented Human Research 1(1)
Recent work in neuroscience, rehabilitation, and machine learning has focused attention on the EEG-based brain–computer interface (BCI) as an exciting field of research. Though the primary goal of the BCI has been to restore communication in the severely paralyzed, BCI for speech communication has gained recognition in a variety of non-medical fields, including silent speech communication, cognitive biometrics, and synthetic telepathy. Though potentially sensitive on various counts, such technology could transform communication as a whole. Considering this wide range of applications, this paper presents research on BCI for speech communication. Because imagined speech is affected by several confounding factors, we focus on subvocalized speech; to our knowledge, this is the first work to use subvocal verbalization for an EEG-based BCI for speech communication. The electrical signals generated by the human brain during subvocalized speech are captured, analyzed, and interpreted as speech, and the processed EEG signals are used to drive a speech synthesizer, providing communication and acoustic feedback for the user. The basis of this effort is the presumption that, whether speech is overt or covert, it always originates in the mind. Scalp maps provide evidence that predicting subvocal speech from neurological signals is achievable, and the statistical results of the study demonstrate that speech prediction is possible. EEG signals suffer from the curse of dimensionality due to intrinsic biological and electromagnetic complexities; therefore, a subset selection method (SSM) using pairwise cross-correlation is proposed to reduce the size of the data while minimizing loss of information. The prominent variances obtained from the SSM, based on principal representative features, were used to analyze multiclass EEG signals, and a multiclass support vector machine classifies the EEG signals of five subvocalized words recorded from scalp electrodes. Though the work identifies many challenges, it exhibits the promise of this technology.
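The dimensionality-reduction and classification pipeline described above can be sketched as follows. The selection rule (dropping one channel from each highly correlated pair) and all thresholds are illustrative assumptions, not the authors' exact SSM procedure:

```python
import numpy as np
from sklearn.svm import SVC

def select_channels(X, threshold=0.95):
    """Greedy subset selection via pairwise cross-correlation: keep a
    channel only if its absolute correlation with every already-kept
    channel stays below the threshold, discarding redundant electrodes."""
    corr = np.abs(np.corrcoef(X.T))
    keep = []
    for i in range(corr.shape[0]):
        if all(corr[i, j] <= threshold for j in keep):
            keep.append(i)
    return keep

# Toy data: 100 samples, 4 "channels"; channel 1 nearly duplicates channel 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(100)
kept = select_channels(X)
print(kept)  # channel 1 is dropped as redundant

# Multiclass SVM over the retained channels (5 classes, one per word).
y = rng.integers(0, 5, size=100)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X[:, kept], y)
```

In the real setting, each row would be a feature vector extracted from one subvocalization trial, and the five class labels would correspond to the five subvocalized words.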