Title: Activity Segmentation and Identification based on Eye Gaze Features
Authors: S. Amrouche, Benedikt Gollan, A. Ferscha, Josef Heftberger
DOI: 10.1145/3197768.3197775. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: In step with the ongoing digitalization of production processes, Human-Computer Interaction (HCI) technologies have evolved rapidly in industrial applications, providing an abundance of versatile tracking and monitoring devices suited to complex challenges. This paper focuses on activity segmentation and activity identification, among the most crucial challenges in pervasive computing, using only visual attention features captured by mobile eye-tracking sensors. We propose a novel, application-independent approach to segmenting task executions in a semi-manual industrial assembly setup that exploits the expressive properties of the distribution-based gaze feature Nearest Neighbor Index (NNI) to build a dynamic activity segmentation algorithm. The approach is enriched with a machine learning validation model that acts as a feedback loop to classify segment quality. It is evaluated in an alpine ski assembly scenario with real-world data, reaching an overall detection accuracy of 91%.
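The abstract does not spell out how the Nearest Neighbor Index is computed over gaze points. As background, here is a minimal sketch of the standard Clark-Evans NNI on 2-D gaze samples; the function name and the area parameter are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nearest_neighbor_index(points, area):
    """Clark-Evans Nearest Neighbor Index for a set of 2-D gaze points.

    NNI = mean observed nearest-neighbor distance / expected distance
    under complete spatial randomness, 0.5 * sqrt(area / n).
    NNI < 1 suggests clustered gaze, NNI > 1 dispersed gaze.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # all pairwise distances; mask the diagonal so a point is not its own neighbor
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    mean_observed = d.min(axis=1).mean()
    expected = 0.5 * np.sqrt(area / n)
    return mean_observed / expected
```

Values well below 1 indicate spatially clustered gaze (attention concentrated on one region), values near or above 1 indicate dispersed gaze; a segmentation algorithm can track this statistic over a sliding window and treat changes as candidate task boundaries.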
Title: Touchless heart rate Recognition by Robots to support natural Human-Robot Communication
Authors: G. Bieber, Niklas Antony, Marian Haescher
DOI: 10.1145/3197768.3203181. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: With the proliferation of robotic assistants such as robot vacuum cleaners, telepresence robots, and shopping assistance robots, human-robot interaction is becoming increasingly natural. The capabilities of robots are expanding, which leads to a growing need for natural human-robot communication and interaction. Therefore, the modalities of text- or speech-based communication have to be extended by body language and direct feedback such as emotion or non-verbal communication. In this paper, we present a camera-based, non-contact optical heart rate recognition method that can be used in robots to identify humans' reactions during human-robot communication and interaction. For heart rate and heart rate variability detection, we used standard cameras (webcams) located inside the robot's eye. Although camera-based vital sign identification has been discussed in previous research, certain limitations with regard to real-world applications still exist. We identified artificial light sources as one of the main influencing factors, and therefore propose strategies with the aim of improving natural communication between social robots and humans.
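The abstract leaves the recognition method at a high level. A common baseline for camera-based (remote photoplethysmography) heart-rate estimation is to average the green channel over the face region in each frame and pick the dominant spectral peak in the cardiac band. A sketch of that baseline follows; it is not the authors' method, and the function name and band limits are assumptions:

```python
import numpy as np

def estimate_heart_rate(green_means, fps, low_hz=0.7, high_hz=4.0):
    """Estimate heart rate (bpm) from per-frame mean green-channel values.

    Centers the signal, then picks the dominant FFT frequency in the
    plausible cardiac band (default 0.7-4.0 Hz, i.e. 42-240 bpm).
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                       # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= low_hz) & (freqs <= high_hz)
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0                     # Hz -> beats per minute
```

The paper's concern about artificial light sources maps directly onto this pipeline: flicker from mains-powered lighting injects spectral peaks that can dominate the cardiac band, which is why illumination-robust preprocessing matters in real-world deployments.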
Title: Two examples of online eHealth platforms for supporting people living with cognitive impairments and their caregivers
Authors: V. Solachidis, I. Paliokas, N. Vretos, K. Votis, U. Cortés, D. Tzovaras
DOI: 10.1145/3197768.3201556. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: This paper compares two methodological approaches derived from the EU Horizon 2020 funded projects CAREGIVERSPROMMD (C-MMD) and ICT4LIFE. Both projects were initiated in 2016 with the ambition to provide new integrated care services to people living with cognitive impairments, including dementia, Alzheimer's disease, and Parkinson's disease, as well as to their home caregivers, towards a long-term increase in quality of life and autonomy at home. We outline the disparities and similarities of the non-pharmacological interventions the two projects introduce to foster treatment adherence. Both have developed software solutions, including social platforms, notifications, serious games, user monitoring, and support services, aimed at developing the concepts of self-care, active patients, and integrated care. Despite their differences, both projects can benefit from knowledge and technology exchange, the sharing of pilot results, and, where possible, user exchange in the near future.
Title: Insights into the Introduction of Digital Interventions at the shop floor
Authors: Marlene Schafler, F. J. Lacueva-Pérez, L. Hannola, Stelios Damalas, Jan Nierhoff, Thomas Herrmann
DOI: 10.1145/3197768.3203176. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: It is well known that the introduction of innovative digital tools in manufacturing under Industry 4.0 has far-reaching effects at both the organizational and the individual level. The H2020-funded project FACTS4WORKERS (Worker-Centric Workplaces in Smart Factories) aims to develop user-centered assistance systems and to demonstrate their impact and applicability on the shop floor. To do so, it is important to understand how to develop such tools and how to assess whether advantages can be derived from the created ICT system. This study introduces the technology of a workplace solution that is linked to a specific industrial challenge. Subsequently, a two-step approach to evaluating the presented system is discussed. Heuristics, an output of the project "Heuristics for Industry 4.0", are used to test whether the developed solution covers critical aspects of socio-technical system design. The aim is to provide insights into the design, development, and holistic evaluation of digital tools on the shop floor.
Title: v-CAT: A Cyberlearning Framework for Personalized Cognitive Skill Assessment and Training
Authors: Michalis Papakostas, K. Tsiakas, M. Abujelala, M. Bell, F. Makedon
DOI: 10.1145/3197768.3201545. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: Recent research has shown that hundreds of millions of workers worldwide may lose their jobs to robots and automation by 2030, impacting over 40 developed and emerging countries and affecting more than 800 types of jobs. While automation promises to increase productivity and relieve workers of tedious or heavy-duty tasks, it can also widen the gap, leaving behind workers who lack automation training. In this project, we propose to build a technology-based, personalized vocational cyberlearning training system in which the user is assessed while immersed in a simulated workplace/factory task environment, while the system collects and analyzes multisensory cognitive, behavioral, and physiological data. Such a system will produce recommendations to support targeted vocational training decision-making. The focus is on collecting and analyzing specific neurocognitive functions, including working memory, attention, cognitive overload, and cognitive flexibility. Collected data are analyzed to reveal, in an iterative fashion, relationships between physiological and cognitive performance metrics, and how these relate to work-related behavioral patterns that require special vocational training.
Title: Tactile sheets: using engraved paper overlays to facilitate access to a digital document's layout and logical structure
Authors: M. Avila, Francisco Kiss, Ismael Rodríguez, A. Schmidt, Tonja Machulla
DOI: 10.1145/3197768.3201530. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: Touchscreen devices (e.g., smartphones, tablets) are a major means of accessing digital resources. However, touchscreen accessibility remains a challenge for users with visual impairments. Mainstream solutions implicitly favor sequential navigation of digital information. This precludes users from enjoying the advantages of well laid-out, visually structured documents, especially for certain tasks (e.g., text navigation). In this paper, we introduce tactile sheets: engraved paper sheets that represent the layout of a specific page and are used as an overlay on a capacitive touchscreen device. Via engraved tactile patterns and textures, users can locate and discriminate different content areas, navigate spatially distributed content non-sequentially, and access speech feedback with gestures. We report a comparative study with nine visually impaired users that investigates the technical feasibility and usability of this approach. Specifically, we compared a mainstream screen reader with two different types of tactile sheets and found a similar level of usability across conditions. Participants' qualitative feedback provides strong arguments for the use of tactile pattern overlays. Finally, we introduce a processing pipeline for automatically creating tactile sheets from an existing e-book.
Title: Smart Tourism in Cities: Exploring Urban Destinations with Audio Augmented Reality
Authors: Costas Boletsis, Dimitra Chasanidou
DOI: 10.1145/3197768.3201549. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: Audio augmented reality (AR) allows for the simultaneous perception of the real environment and a virtual audio overlay. This is especially important in mobile use contexts, where users should remain continuously aware of their surroundings, as in urban tourism, when tourists explore foreign cities and their sights. In this work, we investigate the design and implementation of audio AR systems for urban tourism. Our prototype, called AudioNear, is designed to support tourists' exploration of open, urban environments while providing speech-based information about surrounding tourist sights based on the user's location. At this stage, we present the design concept of AudioNear, its hardware implementation, and first usability feedback. Overall, the study indicated the promising potential of audio AR for providing informative tourist services and engaging experiences.
Title: Visually Perceived Relevance of Objects reveals Learning Improvements and Task Difficulty
Authors: Michael Haslgrübler, Florian Jungwirth, Michaela Murauer, A. Ferscha
DOI: 10.1145/3197768.3201520. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: Recently, there has been increased interest in eye tracking and eye-based human-computer interaction research, as gaze reveals attention and intention, making it highly interesting as an input modality for cognitive assistant systems. In this paper, we show how to apply gaze awareness in an industrial environment to create such a cognitive assistant system. A prototypical system was built upon a mobile eye tracker that analyzes the user's field of view to recognize possible objects of interest while simultaneously checking for gaze fixations. A user study with twenty-one participants was carried out; it builds upon the established hypothesis of a correlation between eye fixations and the relevance of objects, and additionally shows that this relationship changes over multiple tasks and runs, revealing learning improvements and task difficulty. Both features are vital for reducing the amount of support an assistance system provides to its users, resulting in an overall better user experience.
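The abstract does not say how gaze fixations are detected. A widely used baseline is dispersion-threshold identification (I-DT): a window of samples counts as a fixation while its spatial spread stays under a threshold. A sketch under assumed units (dispersion in the same units as the gaze coordinates, minimum duration in samples; not necessarily the system described here):

```python
def _dispersion(window):
    """Sum of the x- and y-ranges of a window of (x, y) gaze samples."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_dispersion=1.0, min_samples=6):
    """Dispersion-threshold (I-DT) fixation detection.

    gaze: sequence of (x, y) samples. Returns (start, end) index pairs
    (end exclusive) for windows of at least min_samples whose dispersion
    stays below max_dispersion.
    """
    fixations = []
    start = 0
    while start < len(gaze):
        end = start + min_samples
        if end > len(gaze):
            break
        if _dispersion(gaze[start:end]) > max_dispersion:
            start += 1          # no fixation starts here; slide forward
            continue
        # grow the window while dispersion stays under the threshold
        while end < len(gaze) and _dispersion(gaze[start:end + 1]) <= max_dispersion:
            end += 1
        fixations.append((start, end))
        start = end
    return fixations
```

Thresholds are typically chosen per setup (for a head-mounted tracker, a dispersion of about 1 degree of visual angle and a minimum duration around 100 ms are common starting points).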
Title: Design Thinking: Using Photo Prototyping for a user-centered Interface Design for Pick-by-Vision Systems
Authors: Nela Murauer
DOI: 10.1145/3197768.3201532. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: With Industry 4.0, manufacturing facilities are experiencing a dynamic change towards smart factories. The digitalization of processes, accompanied by extensive data acquisition, enables process planners to use innovative technologies such as Augmented Reality (AR) to support employees in their tasks on the shop floor. Among other benefits, visualization in the user's field of view allows hands-free work and avoids unnecessary head movements. To make use of employees' expertise, we involve workers early in the AR interface design process with the aid of Design Thinking. A pervasive method in app design is paper prototyping: early sketches of user interfaces serve as a common language in interdisciplinary teams. In this paper, we develop a method based on paper prototyping tailored to the needs of AR content, called photo prototyping, which overlays the user's field of view with sketches of the future AR content. Furthermore, we describe the experience we gained during a workshop with 40 logistics workers in the automotive industry.
Title: Development of a Mobile Functional Near-infrared Spectroscopy Prototype and its Initial Evaluation: Lessons Learned
Authors: Nils Volkening, Anirudh Unni, Sabeth Becker, J. Rieger, Sebastian J. F. Fudickar, A. Hein
DOI: 10.1145/3197768.3201534. Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, 26 June 2018.

Abstract: This paper presents a new mobile functional near-infrared spectroscopy (fNIRS) device with digital detectors that can be placed anywhere on the head and fit into standard caps to measure cortical brain activation. The device's functionality was evaluated in two steps: first, by means of simple pulse measurements, and second, in a motor cortex study with nine subjects who had to alternate between their right and left hands while using hand-held strength trainers. While the signals from the mobile prototype were not yet stable enough across all channels for analyses such as statistical parametric mapping, the prototype was able to measure significant brain activation changes over the motor cortex when the contralateral hand was active in four subjects. In contrast, the device was not yet able to measure ipsilateral activity. The problems encountered and possible methods for improving signal acquisition are discussed at the end of the paper.
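As context for how raw fNIRS intensities become the activation measures discussed above: the standard processing step is the modified Beer-Lambert law, which converts optical-density changes at two wavelengths into oxy-/deoxyhemoglobin concentration changes. A sketch under assumed parameters; the 760/850 nm wavelength pair, the approximate extinction coefficients, the differential pathlength factor, and the source-detector distance are illustrative, not taken from the device described here:

```python
import numpy as np

# Approximate extinction coefficients in 1/(mM*cm); exact values vary by table.
EXT = np.array([[0.586, 1.548],   # 760 nm: [HbO, HbR]
                [1.058, 0.691]])  # 850 nm: [HbO, HbR]

def mbll(i, i0, dpf=6.0, distance_cm=3.0):
    """Modified Beer-Lambert law for a two-wavelength fNIRS channel.

    i, i0: measured and baseline light intensities, ordered [760 nm, 850 nm]
    (shape (2,) or (2, T)). Returns [dHbO, dHbR] concentration changes in mM.
    """
    delta_od = -np.log10(np.asarray(i, float) / np.asarray(i0, float))
    # delta_od = EXT @ [dHbO, dHbR] * distance * DPF  ->  solve for concentrations
    return np.linalg.solve(EXT, delta_od / (distance_cm * dpf))
```

A contralateral hand movement would then show up as an increase in dHbO (and a smaller decrease in dHbR) on channels over the opposite motor cortex, which is the activation pattern the study reports for four subjects.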