Design and Development of a Powered Upper Limb Exoskeleton with High Payload Capacity for Industrial Operations
Sinan Coruk, M. C. Yildirim, Ahmet Talha Kansizoglu, Oguzhan Dalgic, B. Ugurlu
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209374
This study presents the hardware development and low-level controller structure of an upper-body exoskeleton equipped with high torque-to-weight ratio actuators, intended for industrial applications. The exoskeleton can be adjusted for various arm sizes and can be used by operators between 160 cm and 200 cm in height. The robot structure comprises 4 degrees of freedom, 3 of which are powered via custom-built series elastic actuators with a high power-to-weight ratio and real-time torque control capability. The 4th joint, a prismatic joint, was added to accommodate glenohumeral head elevation, enabling the system to attain a workspace suitable for industrial tasks. The exoskeleton is equipped with a two-piece end effector (E1 and E2) to enable power augmentation tasks. To check torque controllability, initial experiments were conducted at the joint level. As a result, a control bandwidth of 20 Hz was achieved for peak-to-peak torque inputs of 20 Nm.
Towards True Artificial Peers
Norman Weißkirchen, Ronald Böck, A. Wendemuth
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209583
While the original aim of assistant systems is to reduce their users' workload, state-of-the-art systems often fail to achieve this. One reason is that the current generation of assistant systems tends to be used as user interfaces for information access and as pre-planned control systems for external applications. Most of these rely on direct control by the user, or leave decisions entirely in the user's hands, which in turn requires constant supervision to assure flawless execution. As this runs contrary to the idea of relieving the user of workload, and limits artificial intelligence research to incremental advances in interpreting the user, we propose a different approach to integrated human-machine interaction. In our approach, we supply the machine with the ability to generate its own aims and resolution steps, constrained by general and specific rules concerning the system's particular task. Using the capabilities of cognitive architectures, similar to the way humans process information, we propose that machines could interact with their users, and act independently side by side with them, in a fully integrated human-machine environment. Added advantages include greater independence from constant supervision, the ability to generate new solution steps, and adaptation to new problems. By combining the two concepts of assistant systems and cognitive architectures, we can create a system capable of seamless human-machine interaction and integration: a peer to its user rather than a servant or a simple assistant.
Investigating the Utility of fNIRS to Assess Mental Workload in a Simulated Helicopter Environment
M. Masters, A. Schulte
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209549
Functional near-infrared spectroscopy (fNIRS) has been used with moderate success in many passive brain-computer interface applications. Much of this recent work has focused on differentiating between various states shortly following discrete stimuli. We aim to extend these results to the assessment of an operator's mental state in the complex environment encountered by helicopter pilots. This work presents initial efforts in this direction. Stepping through phases of increasing complexity, fNIRS data from the pre-frontal cortex were collected and analyzed from four participants as they completed n-back tests, discrete flight simulator tasks, and abbreviated simulated medevac mission scenarios. Data collected during the n-back tests and discrete simulator tasks were found not to be significantly clustered in the feature space considered. A support vector machine (SVM) classifier was trained on the n-back data to differentiate between workload levels and applied to the discrete simulator task data, achieving an average 3-class classification accuracy of 57% and an average 2-class classification accuracy of 68%. Finally, this classifier was applied to the data collected during the simulated mission, and the result was found to be only weakly correlated with the participants' subjectively assessed workload. Given these results, it is not yet clear how an n-back-trained classifier could be used to augment an adaptive crew support system. We suggest that the levels of "workload" measured by an n-back test should not be expected to map onto other, more complex, subjective evaluations of workload. Strong hemodynamic responses observed during mission execution, however, suggest that fNIRS may contain data relevant to the augmentation of an adaptive assistant system.
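The cross-task evaluation described above (train an SVM on n-back data, apply it to data from a different task) can be sketched as follows. This is a hedged illustration only: the feature names, distributions, and domain shift are fabricated stand-ins, not the paper's actual fNIRS features or results.

```python
# Sketch: train an SVM on features from one task (stand-in for n-back data)
# and evaluate it on features from a shifted task (stand-in for simulator
# tasks). All numbers here are synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def synth_features(n_per_class, shift):
    """Synthetic 2-D 'hemodynamic' features for three workload levels."""
    X, y = [], []
    for level in range(3):  # three workload levels, as in the 3-class case
        X.append(rng.normal(loc=level + shift, scale=1.0, size=(n_per_class, 2)))
        y.append(np.full(n_per_class, level))
    return np.vstack(X), np.concatenate(y)

X_train, y_train = synth_features(60, shift=0.0)  # "n-back" training data
X_test, y_test = synth_features(60, shift=0.3)    # domain-shifted "simulator" data

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"3-class cross-task accuracy: {acc:.2f}")
```

The domain shift between training and test distributions is one plausible way to mimic why an n-back-trained classifier transfers poorly to a different task context.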
Collaborative Environmental Monitoring through Teams of Trusted IoT devices
G. Fortino, F. Messina, D. Rosaci, G. Sarné, Claudio Savaglio
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209433
In the Internet of Things (IoT) age, most real environments will become smart through the massive spread of novel devices with cyber-physical abilities. Equipped with increasing computational capabilities and pervasively deployed, smart IoT devices (SDs) will exhibit pro-active behaviors and perform increasingly complex tasks, enabling advanced cyber-physical services that make the environment ever smarter. These technological advancements, together with increased environmental awareness, suggest exploiting SDs for monitoring natural environments. However, in the presence of many heterogeneous SDs, forming good teams requires high levels of trustworthiness among the members, and it is therefore necessary to adequately represent their mutual trustworthiness. To this end, this paper contributes: (i) a trust measure combining the reputation of SDs and the precision of their sensory data; (ii) a framework that adopts this measure as the main criterion for forming temporary teams of humans and SDs; and (iii) an evaluation of the proposed trust-based framework on a case study simulating the collaborative monitoring of a natural environment. The results confirm potential improvements in team composition in terms of both performance and appreciation.
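The trust-and-team-formation idea above can be sketched in a few lines. Note the assumptions: the linear weighting, the weight value, and the device fields are illustrative choices of ours, not the authors' actual trust formula.

```python
# Illustrative sketch: a trust score per smart device (SD) combining its
# reputation with the precision of its sensory data, then team formation by
# ranking. The convex-combination form and all values are assumptions.
from dataclasses import dataclass

@dataclass
class SmartDevice:
    name: str
    reputation: float  # in [0, 1], e.g. aggregated peer feedback
    precision: float   # in [0, 1], e.g. 1 - normalized sensor error

def trust(sd: SmartDevice, w_rep: float = 0.6) -> float:
    """Convex combination of reputation and data precision."""
    return w_rep * sd.reputation + (1.0 - w_rep) * sd.precision

def form_team(devices, size):
    """Pick the `size` most trusted SDs for a temporary monitoring team."""
    return sorted(devices, key=trust, reverse=True)[:size]

devices = [
    SmartDevice("temp-sensor-A", reputation=0.9, precision=0.7),
    SmartDevice("humidity-B", reputation=0.5, precision=0.95),
    SmartDevice("camera-C", reputation=0.8, precision=0.8),
]
team = form_team(devices, size=2)
print([sd.name for sd in team])  # → ['temp-sensor-A', 'camera-C']
```

A real framework would update reputation dynamically from team outcomes; this sketch only shows the ranking step.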
On the Analysis of Human Posture for Detecting Social Interactions with Wearable Devices
P. Baronti, M. Girolami, Fabio Mavilia, Filippo Palumbo, Giancarlo Luisetto
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209510
Detecting the dynamics of social interaction is a difficult task, even with sensing devices able to collect data at high temporal resolution. In this context, this work focuses on the effect of body posture on detecting face-to-face interactions between individuals. To this end, we describe the NESTORE sensing kit, which we used to collect a significant dataset mimicking common postures of subjects while interacting. Our experimental results clearly distinguish the postures that negatively affect the quality of the signals used for detecting an interaction from those that do not. We also show the performance of the SID (Social Interaction Detector) algorithm with different settings, and present its accuracy in classifying interaction and non-interaction events.
Design Validation and Collision Studies of A Compliant Magnetorheological Fluid Actuator
Guangzeng Chen, Pengyu Jie, Tongyi Shang, Y. Lou, T. T. Wong, Li Chen, Jian Liu
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209512
Robots are becoming an integral part of human society. Robots and humans must be considered as a whole system, and robots should be intelligent both digitally and physically, like humans, in order to cooperate and interact with humans physically or non-physically. To improve the physical intelligence of robots, a magnetorheological fluid actuator (MRA) is proposed and studied in this paper. A collision model and a speed-based peak-force planning method are proposed, and experiments are conducted. The results show that the peak collision force of the MRA with external objects can be easily controlled to different levels by controlling the compliance of the MRA, enabling safe physical interaction with humans as well as striking tasks such as hammering a nail.
InHARD - Industrial Human Action Recognition Dataset in the Context of Industrial Collaborative Robotics
Mejdi Dallel, Vincent Havard, D. Baudry, X. Savatier
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209531
Nowadays, humans and robots are working more closely together. This increases business productivity and product quality, leading to efficiency and growth. However, human-robot collaboration is rather static: robots move to a specific position, then humans perform their tasks while being assisted by the robots. To achieve dynamic collaboration, robots need to understand human intention and learn to recognize performed actions, thereby complementing human capabilities and relieving workers of arduous tasks. Consequently, there is a need for a human action recognition dataset for machine learning algorithms. Currently available depth-based and RGB+D+S human action recognition datasets have a number of limitations, including the lack of training samples with distinct class labels, camera views, and diversity of subjects, and, more importantly, the absence of actual industrial human actions in an industrial environment. Existing action recognition datasets cover simple daily, mutual, or health-related actions. Therefore, in this paper we introduce an RGB+S dataset named the Industrial Human Action Recognition Dataset (InHARD), collected in a real-world setting for industrial human action recognition, with over 2 million frames from 16 distinct subjects. The dataset contains 13 industrial action classes and over 4800 action samples. This dataset enables the study and development of various learning techniques for analyzing human actions in industrial environments involving human-robot collaboration.
Just Feeling the Force: Just Noticeable Difference for Asymmetric Vibrations
D. V. Baelen, J. Ellerbroek, R. Paassen, D. Abbink, M. Mulder
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209492
Previous research has shown that haptic feedback, in the form of asymmetric vibrations, can provide directional cues to an operator in a laboratory setting. Nevertheless, it is unclear how these vibrations should be designed for pilots controlling their aircraft through a side-stick. This paper aims to determine the magnitude and shape at which vibrations can still be perceived as directional cues, for one fixed frequency chosen from the literature. The threshold magnitude of two forcing-function shapes (triangular and sawtooth) was determined for both pulling and pushing cues in a just-noticeable-difference experiment. Participants were asked to report the cue direction at varying input magnitudes while exerting different offset force levels on the stick at different positions. The results confirmed all hypotheses: the asymmetric sawtooth-shaped vibration had a lower perception threshold than the triangular one; a higher offset force decreased the threshold in the opposite direction; and stick position had no effect on the obtained thresholds. Based on the experiment, we advise using sawtooth vibrations with an amplitude above 0.094 Nm.
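The two forcing-function shapes compared above can be sketched numerically. Only the 0.094 Nm amplitude comes from the abstract; the frequency, sample rate, and exact waveform parameterizations are our assumptions for illustration.

```python
# Sketch of the two forcing-function shapes: a symmetric triangular wave and
# an asymmetric sawtooth (slow ramp up, instantaneous reset), which is what
# produces a directional cue. Frequency and sample rate are assumed values.
import numpy as np

AMPLITUDE_NM = 0.094   # advised minimum sawtooth amplitude from the paper
FREQ_HZ = 150.0        # assumed vibration frequency (not stated in the abstract)
FS = 10_000            # sample rate in Hz

t = np.arange(0, 0.05, 1 / FS)      # 50 ms of signal
phase = (t * FREQ_HZ) % 1.0         # normalized phase in [0, 1)

# Symmetric triangle: ramps down for half a period, up for the other half.
triangle = AMPLITUDE_NM * (4 * np.abs(phase - 0.5) - 1)

# Asymmetric sawtooth: linear ramp from -A to +A, then a sudden reset.
sawtooth = AMPLITUDE_NM * (2 * phase - 1)

print(triangle.max(), sawtooth.min())
```

The asymmetry matters because the abrupt reset of the sawtooth concentrates force in one direction per cycle, whereas the triangle's up- and down-ramps are perceptually balanced.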
Adaptation in Human-Autonomy Teamwork
K. Sycara, Dana Hughes, Huao Li, M. Lewis, Nina Lauharatanahirun
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209410
With the development of AI technology, intelligent agents are expected to team with humans and adapt to their teammates in changing environments, as effective human team members would do. As an initial step towards adaptive agents, the present study examined individuals' adaptive actions in a cooperative task. By analyzing performance when participants were paired with different partners, we were able to identify adaptations and isolate individual contributions to team performance. We show that team performance is determined by factors at both the individual and team levels. Using subjective similarity data collected on Amazon Mechanical Turk, we constructed high-dimensional embeddings of the similarity distance between team trajectories. The results showed that the team members who adapted most produced the greatest improvements in team performance. In ongoing experiments we are extending our approach to examine the relation between teammate-likeness, sensitivity to social risk, and performance in human-agent teams.
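The embedding step described above (pairwise similarity judgments of team trajectories turned into coordinates) is commonly done with multidimensional scaling; a minimal sketch under that assumption follows. The dissimilarity data here are fabricated, and the paper's actual embedding method and dimensionality may differ.

```python
# Hedged sketch: embed team trajectories from pairwise dissimilarities
# (e.g. 1 - rated similarity from Mechanical Turk) via classical MDS.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n_trajectories = 6

# Build a symmetric dissimilarity matrix with a zero diagonal.
d = rng.uniform(0.1, 1.0, size=(n_trajectories, n_trajectories))
dissimilarity = (d + d.T) / 2
np.fill_diagonal(dissimilarity, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(dissimilarity)
print(embedding.shape)  # one 2-D point per team trajectory
```

Distances between embedded points can then serve as a proxy for how much one team's trajectory resembles another's, which is the kind of comparison the adaptation analysis relies on.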
Cooperative Intelligence - A Humane Perspective
B. Sendhoff, H. Wersing
Pub Date: 2020-09-01 | DOI: 10.1109/ICHMS49158.2020.9209387
In this contribution, we outline our concept of cooperation between humans and intelligent systems, which we denote cooperative intelligence. We argue from a human perspective and emphasize the advantages of keeping the human in the loop rather than targeting autonomous systems. Our focus is on respecting human values such as retaining competences, sharing experiences, and self-esteem. We discuss process-oriented requirements for intuitive cooperation, such as joint goals and shared intentions, and social dimensions such as empathy, relations, and trust. Finally, we suggest that cooperative intelligence can be facilitated by integrating interaction episodes across multiple system embodiments and instances, achieving the best holistic service with regard to personal preferences and needs.