Awareness Aspects and Effects on Coordination and Information Sharing in the Domain of Software Engineering as CSCW: An Ethnographic Study
Simon Elias Bibri
Augmented Human Research, vol. 4, no. 1. Pub Date: 2019-04-15. DOI: 10.1007/s41133-019-0016-3
This study sets out to understand and explore awareness in software engineering as a form of collaborative work, in terms of its entailments and implications. In doing so, it intends to answer two questions: Which aspects of awareness are most relevant to the domain of software engineering? In what ways does awareness affect coordination and information sharing among collaborating actors in that domain? An ethnographic methodology was adopted to test existing theoretical perspectives and generalizations pertaining to cooperative awareness. The study shows that what constitutes awareness, beyond the what and where of work context, is largely contingent on the requirements and objectives of each professional domain with respect to the nature and complexity of joint activities and the manner in which the collaborating actors interact accordingly. It also depends on the role they play in cooperative endeavors. The aspects most relevant to software engineering include work-oriented information on each other's activities, tasks as moment-by-moment work processes, general information about all collaborating actors, detailed information about each other's attention and responsibility regarding ongoing projects, changes to shared document repositories and workspaces, and the delicate interplay between individual and cooperative activities. Moreover, the study corroborates the importance of awareness information for successful and effective collaboration in software engineering in terms of coordination and information sharing.
Accordingly, awareness affects coordination and information sharing in several ways: it ensures that the contributions of collaborating actors are relevant to the coordinated work and that their actions are evaluated against the coordinated goals; it reduces the effort associated with coordination and information sharing, in terms of both the use of synchronous and asynchronous CSCW technologies and the frequency of that use; it keeps collaboration overhead to a minimum; it enables actors to structure activities, avoid duplication of work, and fine-tune synergistic group and shared working behavior in relation to coordinated tasks; and it acts as a foundation for (and predictor of) closer collaboration. In addition, in the domain of software engineering, the contextual conditions that determine how coordination and information sharing can be achieved are mostly personal, experiential, and/or professional. The study suggests that, when designing and evaluating CSCW systems, it is important to focus on awareness both as a set of theoretical perspectives and as support technologies pertaining to domain-relevant work practice, in order to achieve more effective and efficient coordination and information sharing as social processes.
The Effects of Whole-Body Vibration (WBV) Evaluated Using Cognitive Brain-Training App Games on Tablet or Cell Phone for Both Genders
Herbert Câmara Nick, Maria Lucia Machado Duarte, Pedro Vieira Xavier
Augmented Human Research, vol. 4, no. 1. Pub Date: 2019-03-07. DOI: 10.1007/s41133-019-0015-4
Advances in mobile devices and wireless computing have transformed the way individuals communicate, making the use of such devices very frequent while traveling on means of transport. Vibrations occur in a variety of human activities, such as travel, leisure, and work, and exposure to them is therefore unavoidable. To make the most of travel time, people try to keep their minds active with study or work, or seek socialization for distraction or entertainment. This has sparked the interest of the scientific community in studying how this interaction between the individual and the cell phone can affect cognitive performance. The present article analyzes the effects of whole-body vibration (WBV) exposure on cognition in university students of both genders, assessed through a cognitive brain-training game application on a cell phone or tablet. The aim is to present the influence of whole-body vibration on cognition across different variables (gender, type of device used, and frequency of exposure) as a contribution to scientific and technological research. Forty people (20 female and 20 male) participated in the experimental test, exercising their cognitive abilities through a brain-training game application based on traffic-light images: when the green light appeared, the subjects had to tap the screen as quickly as possible, with processing speed as the objective measure. The individuals were subdivided into groups to verify the influence of the vibration according to the type of mobile device used for the game (smartphone or tablet), the gender of the volunteer (female or male), and the whole-body vibration frequency (5 Hz or 30 Hz), with an amplitude of 0.8 m/s². The WBV exposure duration was 10 min.
After the exposure, participants remained at rest for 5 min while playing a new round of the game. In this way, data could be acquired before and during exposure to vibration, and following a period of rest after the exposure. Given the collected data, a nonparametric statistical analysis (Mann–Whitney tests) was required. The results showed a decline in game score during the vibration period relative to the initial period without vibration. However, there is a tendency toward recovery in the game score during the rest period after the exposure, relative to the initial rest period without vibration. Comparing cell phone versus tablet, for same-gender groups under vibration at the same frequency, better results are obtained with the tablet. Volunteers exposed to whole-body vibration at the 30 Hz frequency presented better results than those exposed at the 5 Hz frequency.
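The Mann–Whitney comparison mentioned in this abstract can be sketched as follows; the score values below are illustrative placeholders, not the study's data.

```python
# Hypothetical sketch of the abstract's nonparametric analysis: game scores
# before vs. during whole-body vibration, compared with a Mann-Whitney U test.
# The numbers are made-up illustrations, not the study's measurements.
from scipy.stats import mannwhitneyu

scores_baseline = [82, 75, 90, 68, 88, 79, 85, 73]   # before exposure
scores_vibration = [70, 66, 81, 60, 77, 69, 74, 65]  # during exposure

# Two-sided test: do the two conditions differ in score distribution?
stat, p_value = mannwhitneyu(scores_baseline, scores_vibration,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

A nonparametric test is the natural choice here because game scores from small groups need not be normally distributed.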
Adaptive Artifact Cancelation Based on Bacteria Foraging Optimization for ECG Signal
Agya Ram Verma, Yashvir Singh
Augmented Human Research, vol. 4, no. 1. Pub Date: 2019-02-25. DOI: 10.1007/s41133-019-0014-5
In this paper, the design of an adaptive artifact canceler (AAC) filter using the bacteria foraging optimization (BFO) algorithm is presented. The performance of the proposed AAC filter is tested on a corrupted ECG signal. Simulation results show that the AAC filter designed with the BFO technique achieves significant improvement in fidelity parameters such as SNR, NRMSE, and NRME compared with other algorithms reported in the literature. The BFO-based AAC filter provides a 6 dB improvement in output SNR, an 85% reduction in NRMSE, and a 90% lower NRME compared with a recently reported AAC filter based on the ABC-SF algorithm. Furthermore, the BFO-based AAC filter enhances the coherence between the pure and reconstructed ECG signals.
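The fidelity parameters named in this abstract can be illustrated with their standard definitions; this is a sketch using synthetic signals, not the paper's filter or data, and the exact normalization the authors used is an assumption here.

```python
# Common definitions of two fidelity metrics for comparing a pure and a
# reconstructed ECG signal. The sine wave stands in for a clean ECG and the
# noisy copy stands in for a filter's output; both are synthetic placeholders.
import numpy as np

def output_snr_db(pure, reconstructed):
    """Output SNR in dB: signal power over residual-error power."""
    noise = pure - reconstructed
    return 10 * np.log10(np.sum(pure ** 2) / np.sum(noise ** 2))

def nrmse(pure, reconstructed):
    """RMSE normalized by the peak-to-peak range of the pure signal."""
    rmse = np.sqrt(np.mean((pure - reconstructed) ** 2))
    return rmse / (pure.max() - pure.min())

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
pure = np.sin(2 * np.pi * 5 * t)                   # toy "clean ECG"
reconstructed = pure + 0.05 * rng.standard_normal(500)  # toy filter output

print(f"SNR   = {output_snr_db(pure, reconstructed):.1f} dB")
print(f"NRMSE = {nrmse(pure, reconstructed):.4f}")
```

Higher SNR and lower NRMSE both indicate a reconstruction closer to the pure signal, which is the sense in which the paper reports its 6 dB and 85% improvements.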
Bringing Modern Machine Learning into Clinical Practice Through the Use of Intuitive Visualization and Human–Computer Interaction
Richard Osuala, Jieyi Li, Ognjen Arandjelovic
Augmented Human Research, vol. 4, no. 1. Pub Date: 2019-02-19. DOI: 10.1007/s41133-019-0012-7
The increasing trend of systematic collection of medical data (diagnoses, hospital admission emergencies, blood test results, scans, etc.) by healthcare providers offers an unprecedented opportunity for the application of modern data mining, pattern recognition, and machine learning algorithms. The ultimate aim is invariably to improve outcomes, directly or indirectly. Notwithstanding the successes of recent research efforts in this realm, a major obstacle remains largely unaddressed: making the developed models usable by medical professionals rather than computer scientists or statisticians. Yet mounting evidence shows that the ability to understand and easily use novel technologies is a major factor governing how widely they are likely to be adopted by their target users (doctors, nurses, and patients, among others). In this work we address this gap. In particular, we describe a portable, web-based interface that allows healthcare professionals to interact with recently developed machine learning and data-driven prognostic algorithms. Our application interfaces with a statistical disease progression model and displays its predictions in an intuitive and readily understandable manner. Different types of geometric primitives and their visual properties (such as size or colour) are used to represent abstract quantities such as probability density functions, the rate of change of relative probabilities, and a series of other relevant statistics, which the healthcare professional can use to explore patients' risk factors or to provide personalized, evidence- and data-driven incentivization to the patient.
Gesture-Based Communication via User Experience Design: Integrating Experience into Daily Life for an Arm-Swing Input Device
Nuanphan Kaewpanukrangsi, Chutisant Kerdvibulvech
Augmented Human Research, vol. 4, no. 1. Pub Date: 2019-02-12. DOI: 10.1007/s41133-019-0013-6
Arm swing is a well-known exercise practiced throughout Asia, and regular practice benefits a person's gait, shoulder function, and head and body strength. This paper presents a new study of gesture-based communication that uses the arm swing as a gestural input command applied to any digital device used by persons aged 21–50 years during their workday. Five occupations with different routine tasks were studied: student, lecturer, office worker, designer, and industrial worker. The sample comprised 30 participants, six representing each occupation, and data were collected through focus groups. The objective of this study is to explore possibilities for integrating a proper user experience of gesture-based communication with an arm-swing input device into participants' routine activities. It is believed this research can have a significant impact on the quality of daily life. Part of the user experience methodology, such as a field experiment, is applied to prompt participants to follow the correct sequence case by case. This paper adopts user experience measurement for its qualitative research approach. A proper user experience arises when one understands how strengths and weaknesses are affected by gestural input; an input gesture, feedback, system, and output can then be mapped together into a complete scenario of a person's daily life. Each participant is encouraged to raise various ideas on how he or she accesses technology, rather than on how technologies allow access to information.
Based on the experimental results, the paper discusses the possible tasks in the five occupations into which an arm-swing input gesture could be integrated. The influence of this integration on the participants' daily activities confirms that the wearable input device was preferred across all focus groups and can be integrated into activities without interrupting a person's main actions.
Toward Interfaces that Help Users Identify Misinformation Online: Using fNIRS to Measure Suspicion
Leanne Hirshfield, Phil Bobko, Alex Barelka, Natalie Sommer, Senem Velipasalar
Augmented Human Research, vol. 4, no. 1. Pub Date: 2019-02-08. DOI: 10.1007/s41133-019-0011-8
With terms like 'fake news' and 'cyber attack' dominating the news, skepticism toward the media and other online individuals has become a major facet of modern life. This paper views the way we process information during HCI through the lens of suspicion, a mentally taxing state that people enter before making a judgment about whether or not to trust information. With the goal of enabling objective, real-time measurements of suspicion during HCI, we describe an experiment in which fNIRS was used to identify the neural correlates of suspicion in the brain. We developed a convolutional long short-term memory classifier that predicts suspicion using a leave-one-participant-out cross-validation scheme, with average accuracy greater than 76%. Notably, the brain regions implicated by our results dovetail with prior theoretical definitions of suspicion. We describe implications of this work for HCI, to augment users' capabilities by enabling them to develop a 'healthy skepticism' to parse out truth from fiction online.
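The leave-one-participant-out scheme described in this abstract can be sketched with scikit-learn's `LeaveOneGroupOut` splitter. A plain logistic regression stands in for the paper's convolutional LSTM, and the features and labels are random placeholders, not fNIRS data.

```python
# Sketch of leave-one-participant-out cross-validation: each fold holds out
# every sample from one participant, so the classifier is always tested on a
# person it never trained on. Logistic regression is a stand-in model here,
# and the data are random placeholders rather than real fNIRS features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_participants, samples_each, n_features = 5, 20, 8
X = rng.normal(size=(n_participants * samples_each, n_features))
y = rng.integers(0, 2, size=n_participants * samples_each)   # suspicion label
groups = np.repeat(np.arange(n_participants), samples_each)  # participant id

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"{len(accuracies)} folds, mean accuracy = {np.mean(accuracies):.2f}")
```

Grouping the folds by participant matters because physiological signals are highly person-specific; ordinary shuffled cross-validation would leak each participant's data into both splits and inflate the accuracy estimate.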
Multi-Embodiment of Digital Humans in Virtual Reality for Assisting Human-Centered Ergonomics Design
Kevin Fan, Akihiko Murai, Natsuki Miyata, Yuta Sugiura, Mitsunori Tada
Augmented Human Research, vol. 2, no. 1. Pub Date: 2017-10-06. DOI: 10.1007/s41133-017-0010-6
We present a multi-embodiment interface aimed at assisting human-centered ergonomics design, where traditionally the design process is hindered by the need to recruit diverse users or to rely on disembodied simulations in order to design for most groups of the population. The multi-embodiment solution is to actively embody the user in the design and evaluation process in virtual reality while simultaneously superimposing additional simulated virtual bodies on the user's own body. This superimposed body acts as the target and enables simultaneous anthropometric ergonomics evaluation for both the user's self and the target. Both virtual bodies, self and target, are generated using digital human modeling from statistical data; the self-body is animated via motion capture, while the target body is moved using a weighted inverse kinematics approach with end effectors on the hands and feet. We conducted user studies evaluating human ergonomics design in five virtual reality scenarios, comparing multi-embodiment with single embodiment. Similar evaluations were conducted again in the physical environment after the virtual reality evaluations, to explore the post-VR influence of the different virtual experiences.
Pub Date : 2017-08-08DOI: 10.1007/s41133-017-0009-z
Mihai Bâce, Sander Staal, Gábor Sörös, Giorgio Corbellini
Many real-life scenarios can benefit from both physical proximity and natural gesture interaction. In this paper, we explore shared collocated interactions on unmodified wearable devices. We introduce an interaction technique which enables a small group of people to interact using natural gestures. The proximity of users and devices is detected through acoustic ranging using inaudible signals, while in-air hand gestures are recognized from three-axis accelerometers. The underlying wireless communication between the devices is handled over Bluetooth for scalability and extensibility. We present (1) an overview of the interaction technique and (2) an extensive evaluation using unmodified, off-the-shelf mobile and wearable devices, which shows the feasibility of the method. Finally, we demonstrate the resulting design space with three examples of multi-user application scenarios.
{"title":"Collocated Multi-user Gestural Interactions with Unmodified Wearable Devices","authors":"Mihai Bâce, Sander Staal, Gábor Sörös, Giorgio Corbellini","doi":"10.1007/s41133-017-0009-z","DOIUrl":"10.1007/s41133-017-0009-z","url":null,"abstract":"<div><p>Many real-life scenarios can benefit from both physical proximity and natural gesture interaction. In this paper, we explore shared collocated interactions on unmodified wearable devices. We introduce an interaction technique which enables a small group of people to interact using natural gestures. The proximity of users and devices is detected through acoustic ranging using inaudible signals, while in-air hand gestures are recognized from three-axis accelerometers. The underlying wireless communication between the devices is handled over Bluetooth for scalability and extensibility. We present (1) an overview of the interaction technique and (2) an extensive evaluation using unmodified, off-the-shelf, mobile, and wearable devices which show the feasibility of the method. Finally, we demonstrate the resulting design space with three examples of multi-user application scenarios.</p></div>","PeriodicalId":100147,"journal":{"name":"Augmented Human Research","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s41133-017-0009-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50014468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-07-28DOI: 10.1007/s41133-017-0008-0
Jari Kangas, Jussi Rantala, Roope Raisamo
Augmented attention, assisting the user in noticing important things, is one of the ways human action can be enhanced with technologies. We investigated how vibrotactile stimulation applied to the forehead could be used to cue gaze direction. We built a vibrotactile headband with an array of six actuators that presented short, tap-like cues. In the first experiment, the participant was instructed to look at the point on a horizontal line that they thought the vibrotactile cue was pointing to. Analysis of the participants’ gaze points showed that for the majority there were statistically significant differences between cues from different actuators, indicating that the six actuators could successfully direct the participant’s gaze to different areas of the visual field. Vibrotactile cueing of gaze direction could thus be used for directing visual attention and providing navigation cues with wearable headbands. To strengthen our findings, we investigated how effective the vibrotactile stimulation would be for cueing gaze direction in a visual search task. Participants were asked to find a deviant shape (a target) on a display full of simple shapes. The vibrotactile cueing implemented with the headband device informed the participants of the approximate horizontal position of the target in three experimental conditions: in the most informative condition, six actuators indicated the horizontal area where the target would be found; in the second condition, two actuators indicated the side of the display (left or right) containing the target; and in the least informative condition, no directional information was given. Analysis of the trial completion times showed statistically significant differences between the least informative condition and the two other conditions. However, we did not find significant differences in trial completion times between the two conditions in which information about the target location was given. This indicated that while the actuators could successfully direct the participant’s attention to different areas of the visual field to help in the search task, the simple approach of adding actuators and dividing the visual field into more sub-areas did not improve the results. The findings of this study showed that while there is potential in using vibrotactile cueing of gaze direction, more research is needed to fully exploit it.
{"title":"Gaze Cueing with a Vibrotactile Headband for a Visual Search Task","authors":"Jari Kangas, Jussi Rantala, Roope Raisamo","doi":"10.1007/s41133-017-0008-0","DOIUrl":"10.1007/s41133-017-0008-0","url":null,"abstract":"<div><p>Augmented attention, assisting the user in noticing important things, is one of the ways human action can be enhanced with technologies. We investigated how vibrotactile stimulation given to the forehead could be used to cue gaze direction. We built a vibrotactile headband with an array of six actuators that presented short, tap-like cues. In the first experiment, the participant was instructed to look at the point on a horizontal line that they thought the vibrotactile cue was pointing to. Analysis of the participant’s gaze points showed that for the majority there were statistically significant differences between cues from different actuators. This indicated that the six actuators could successfully direct the participant’s gaze to different areas of the visual field. In addition, vibrotactile cueing of gaze direction could be used for directing visual attention and providing navigation cues with wearable headbands. To strengthen our findings, we investigated how effective the vibrotactile stimulation would be to cue gaze direction in a visual search task. Participant’s were asked to find a deviant shape (a target) from a display full of simple shapes. The vibrotactile cueing implemented with the headband device was used to inform the participants of the approximate horizontal position of the target in three different experimental conditions. In the most informative condition, six actuators were used to inform the participant of the horizontal area where the target would be found, in the second condition two actuators were used to inform the participant of the target side on the display (left or right), and in the least informative condition no directional information was given. 
Analysis of the trial completion times showed that there were statistically significant differences between the least informative condition and the two other conditions. However, we did not find significant differences in trial completion times between the two conditions where information of the target location was given. This indicated that while the actuators could successfully direct the participant’s attention to different areas of the visual field to help in the search task, the simple approach of just adding actuators and dividing the visual field to more sub-areas did not improve the results. The findings of this study showed that while there is potential in using vibrotactile cueing of gaze direction, more research is needed to fully exploit it.</p></div>","PeriodicalId":100147,"journal":{"name":"Augmented Human Research","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s41133-017-0008-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50052168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-07-27DOI: 10.1007/s41133-017-0007-1
Ognjen Arandjelović
The penetration of mathematical modelling in sports science has to date been highly limited. In particular, and in contrast to most other scientific disciplines, sports science research has been characterized by comparatively little investment in the development of phenomenological models. Practical applications of such models aimed at assisting trainees or sports professionals more generally remain nonexistent. The present paper aims to address this gap. We adopt a recently proposed mathematical model of neuromuscular engagement and adaptation, and develop around it an algorithmic framework which allows it to be employed in actual training program design and monitoring by resistance training practitioners (coaches or athletes). We first show how training performance characteristics can be extracted from video sequences, effortlessly and with minimal human input, using computer vision. The extracted characteristics are then used to fit the adopted model, i.e., to estimate the values of its free parameters, from differential equations of motion in what is usually termed the inverse dynamics problem. A computer simulation of training bouts using the estimated (and hence athlete-specific) model is used to predict the effected adaptation and, with it, the expected changes in future performance capabilities. Lastly, we describe a proof-of-concept software tool we developed which allows the practitioner to manipulate training parameters and immediately see their effect on predicted adaptation (again, on an athlete-specific basis). Thus, this work presents a holistic view of the monitoring–assessment–adjustment loop which lies at the centre of successful coaching. By bridging the gap between theoretical and applied aspects of sports science, the present contribution highlights the potential of mathematical and computational modelling in this field and serves to encourage further research in this direction.
{"title":"Computer-Aided Parameter Selection for Resistance Exercise Using Machine Vision-Based Capability Profile Estimation","authors":"Ognjen Arandjelović","doi":"10.1007/s41133-017-0007-1","DOIUrl":"10.1007/s41133-017-0007-1","url":null,"abstract":"<div><p>The penetration of mathematical modelling in sports science to date has been highly limited. In particular and in contrast to most other scientific disciplines, sports science research has been characterized by comparatively little effort investment in the development of phenomenological models. Practical applications of such models aimed at assisting trainees or sports professionals more generally remain nonexistent. The present paper aims at addressing this gap. We adopt a recently proposed mathematical model of neuromuscular engagement and adaptation, and develop around it an algorithmic framework which allows it to be employed in actual training program design and monitoring by resistance training practitioners (coaches or athletes). We first show how training performance characteristics can be extracted from video sequences, effortlessly and with minimal human input, using computer vision. The extracted characteristics are then used to fit the adopted model i.e. to estimate the values of its free parameters, from differential equations of motion in what is usually termed the inverse dynamics problem. A computer simulation of training bouts using the estimated (and hence athlete specific) model is used to predict the effected adaptation and with it the expected changes in future performance capabilities. Lastly we describe a proof-of-concept software tool we developed which allows the practitioner to manipulate training parameters and immediately see their effect on predicted adaptation (again, on an athlete specific basis). Thus, this work presents a holistic view of the monitoring–assessment–adjustment loop which lies at the centre of successful coaching. 
By bridging the gap between theoretical and applied aspects of sports science, the present contribution highlights the potential of mathematical and computational modelling in this field and serves to encourage further research focus in this direction.</p></div>","PeriodicalId":100147,"journal":{"name":"Augmented Human Research","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s41133-017-0007-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50103270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}