This study addresses human-robot interaction in a controlled negotiation environment. The aim is to prove that a robot, given its limitations, can win a non-equilibrium-based negotiation against a human by convincing him or her. To do so, a behavioral model based on decision trees is proposed, which adaptively chooses the robot's behavior and action depending on the circumstances, the robot's intention, and the human's past responses. An experiment under two conditions was conducted: one in which the robot was set to play the Desert Survival Situation negotiation game against 10 humans, and one in which the robot was compared to another system with the same knowledge of the game but without the behavior and action generation model. The conclusions were that the robot could win the game in most cases, convincing the human. The results also show that its performance is significantly better than both the human's and the other system's.
{"title":"Adaptive Behavior Generation for Conversational Robot in Human-Robot Negotiation Environment","authors":"M. Lopez, Komei Hasegawa, M. Imai","doi":"10.1145/3125739.3125741","DOIUrl":"https://doi.org/10.1145/3125739.3125741","url":null,"abstract":"This study addresses human-robot interaction in a controlled negotiation environment. The aim is to prove that a robot, given its limitations, can win a non-equilibrium-based negotiation against a human by convincing him or her. To do so, a behavioral model based on decision trees is proposed, which adaptively chooses the robot's behavior and action depending on the circumstances, the robot's intention, and the human's past responses. An experiment under two conditions was conducted: one in which the robot was set to play the Desert Survival Situation negotiation game against 10 humans, and one in which the robot was compared to another system with the same knowledge of the game but without the behavior and action generation model. The conclusions were that the robot could win the game in most cases, convincing the human. The results also show that its performance is significantly better than both the human's and the other system's.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132167691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
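The abstract does not detail the decision-tree behavioral model; as a hedged illustration only, selecting the robot's next move from its intention and the human's last response might be sketched as follows (all node names and response labels are hypothetical, not from the paper):

```python
# Hypothetical sketch of adaptive behavior selection with a tiny hand-built
# decision tree, conditioned on the robot's intention and the human's last
# response. Node and label names are illustrative, not from the paper.

def select_behavior(intention, last_response):
    """Walk a small decision tree to pick the robot's next negotiation move."""
    if intention == "persuade":
        if last_response == "agree":
            return "reinforce"      # human already convinced: strengthen the point
        elif last_response == "disagree":
            return "counter_argue"  # argue against the human's item ranking
        else:
            return "probe"          # ask the human to justify their choice
    # neutral fallback for any other intention
    return "acknowledge"

print(select_behavior("persuade", "disagree"))  # counter_argue
```

A real system would of course learn or hand-tune far richer trees per game state; this only shows the adaptive-selection shape the abstract describes.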
One of the important factors in medical and nursing care in recent years has been arousing the emotions of patients. Many types of communication robots and pet robots have been developed as communication partners for patients. The user's emotions are stimulated during direct communication with the robot, but there is little chance for the robot to approach the user in other situations, such as watching TV or listening to music, without disturbing him or her: the user feels that the robot is troublesome during other tasks. The purpose of this research is to elevate the user's emotional experience through the emotional, physiological expressions of a partner robot in the user's daily life. Ambient but emotional expressions of physiological phenomena can be perceived by touch even when the user is concentrating on other tasks. First, we focused on breathing, heartbeat, and body temperature as the physiological phenomena. The results of our evaluations of the robot's heartbeat and body temperature, along with our previous results for breathing, show that each expression conveys the arousal and pleasure axes of the robot's state. In this paper, we focus on the joint attention of the robot and user to an emotional photograph, and we verified whether the strength of the user's own emotional response to the content was changed by the physiological expressions of the robot while they looked at photographs together. The results suggest that the physiological expressions of the robot make the user's own emotions in the shared emotional experience more excited and more relaxed.
{"title":"Physiological Expression of Robots Enhancing Users' Emotion in Direct and Indirect Communication","authors":"Naoto Yoshida, Tomoko Yonezawa","doi":"10.1145/3125739.3132609","DOIUrl":"https://doi.org/10.1145/3125739.3132609","url":null,"abstract":"One of the important factors in medical and nursing care in recent years has been arousing the emotions of patients. Many types of communication robots and pet robots have been developed as communication partners for patients. The user's emotions are stimulated during direct communication with the robot, but there is little chance for the robot to approach the user in other situations, such as watching TV or listening to music, without disturbing him or her: the user feels that the robot is troublesome during other tasks. The purpose of this research is to elevate the user's emotional experience through the emotional, physiological expressions of a partner robot in the user's daily life. Ambient but emotional expressions of physiological phenomena can be perceived by touch even when the user is concentrating on other tasks. First, we focused on breathing, heartbeat, and body temperature as the physiological phenomena. The results of our evaluations of the robot's heartbeat and body temperature, along with our previous results for breathing, show that each expression conveys the arousal and pleasure axes of the robot's state. In this paper, we focus on the joint attention of the robot and user to an emotional photograph, and we verified whether the strength of the user's own emotional response to the content was changed by the physiological expressions of the robot while they looked at photographs together. The results suggest that the physiological expressions of the robot make the user's own emotions in the shared emotional experience more excited and more relaxed.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125158152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
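The abstract maps each physiological expression onto arousal and pleasure axes but does not give the mapping; as a hedged sketch under assumed, illustrative ranges (none of these numbers are from the paper), such a mapping could look like:

```python
# Hypothetical mapping from an (arousal, pleasure) state to physiological
# parameters a robot could express by touch. All ranges are illustrative
# assumptions, not values reported in the paper.

def physiological_params(arousal, pleasure):
    """Map arousal and pleasure in [0, 1] to breathing rate, heart rate, skin temp."""
    breathing_bpm = 10 + 20 * arousal   # breaths per minute: faster when aroused
    heart_bpm     = 60 + 60 * arousal   # beats per minute: faster when aroused
    skin_temp_c   = 31 + 4 * pleasure   # surface temperature: warmer when pleased
    return breathing_bpm, heart_bpm, skin_temp_c

print(physiological_params(0.5, 1.0))  # (20.0, 90.0, 35.0)
```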
It is now possible to build highly accurate, mobile robots able to perceive their environment. Such robots can be used to assist humans in everyday tasks to reduce their workload. As a consequence, communication and interaction between humans and robots are becoming more important. In this paper we present a system for human-robot collaboration on on-table tasks. Due to its extensible design, our system serves as a base for further investigations in human-robot collaboration. We used the system to implement five action-selection strategies for the robot: proactive, autonomous, reactive, human-requested, and human-commanded. We conducted a pilot study to compare the interaction modes during a task in which the human and the robot build a bridge using blocks. The results of the pilot study indicate that, for the simple bridge-building task, people prefer to interact with a robot using the proactive action-selection strategy. The completion of the pilot study indicates that the system is useful for human-robot collaboration studies. Several limitations have been identified that will be addressed in future developments.
{"title":"Building a Bridge with a Robot: A System for Collaborative On-table Task Execution","authors":"R. Schulz, Philipp Kratzer, Marc Toussaint","doi":"10.1145/3125739.3132606","DOIUrl":"https://doi.org/10.1145/3125739.3132606","url":null,"abstract":"It is now possible to build highly accurate, mobile robots able to perceive their environment. Such robots can be used to assist humans in everyday tasks to reduce their workload. As a consequence, communication and interaction between humans and robots are becoming more important. In this paper we present a system for human-robot collaboration on on-table tasks. Due to its extensible design, our system serves as a base for further investigations in human-robot collaboration. We used the system to implement five action-selection strategies for the robot: proactive, autonomous, reactive, human-requested, and human-commanded. We conducted a pilot study to compare the interaction modes during a task in which the human and the robot build a bridge using blocks. The results of the pilot study indicate that, for the simple bridge-building task, people prefer to interact with a robot using the proactive action-selection strategy. The completion of the pilot study indicates that the system is useful for human-robot collaboration studies. Several limitations have been identified that will be addressed in future developments.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129274444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A shared understanding of language will assist natural interactions between humans and artificial agents or robots undertaking collaborative tasks. An important domain for collaborative armed robots is interacting with humans and objects on a table, for example, picking, placing, or handing over a variety of objects. Such tasks combine object representation and movement planning in the geometric domain with abstract reasoning about symbolic spatial representations. This paper presents an initial study in which a human partner teaches the robot words for spatial relationships by providing exemplars and indicating where words may be used over the surface. This study demonstrates how robots can be taught the words required for these tasks in a quick and simple manner that allows the concepts to be generalizable over different surfaces, objects, and object placements.
{"title":"Collaborative Robots Learning Spatial Language for Picking and Placing Objects on a Table","authors":"R. Schulz","doi":"10.1145/3125739.3132579","DOIUrl":"https://doi.org/10.1145/3125739.3132579","url":null,"abstract":"A shared understanding of language will assist natural interactions between humans and artificial agents or robots undertaking collaborative tasks. An important domain for collaborative armed robots is interacting with humans and objects on a table, for example, picking, placing, or handing over a variety of objects. Such tasks combine object representation and movement planning in the geometric domain with abstract reasoning about symbolic spatial representations. This paper presents an initial study in which a human partner teaches the robot words for spatial relationships by providing exemplars and indicating where words may be used over the surface. This study demonstrates how robots can be taught the words required for these tasks in a quick and simple manner that allows the concepts to be generalizable over different surfaces, objects, and object placements.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129058132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this research, we propose a system that presents a group of multiple agents as fellow learners to the e-learning user. For continuous learning in e-learning, promoting the user's motivation to learn and encouraging appropriate breaks are considered to improve the user's work efficiency. The proposed system controls the proportions of the agents' behaviors in the group by allocating a study state--either studying or taking a break--to each agent in order to indirectly suggest concentration or an appropriate break. Consequently, the system offers multiple actions, but the user retains room to choose among them. The necessity of a break is calculated from the user's degree of concentration and duration of study; it is used to control the ratio of agents taking a break and to avoid the feeling of coercion that results when a single agent presents only one action. In this paper, we examined how the ratio of behaviors in the agent group influences the user. As a result, a greater ratio of studying to resting agents in the group influenced the participants' behaviors.
{"title":"Indirect Control of User's E-learning Motivation by Controlling Activity Ratio of Multiple Agents","authors":"Tomoko Yonezawa, Naoto Yoshida, Kaoru Maeda","doi":"10.1145/3125739.3125748","DOIUrl":"https://doi.org/10.1145/3125739.3125748","url":null,"abstract":"In this research, we propose a system that presents a group of multiple agents as fellow learners to the e-learning user. For continuous learning in e-learning, promoting the user's motivation to learn and encouraging appropriate breaks are considered to improve the user's work efficiency. The proposed system controls the proportions of the agents' behaviors in the group by allocating a study state--either studying or taking a break--to each agent in order to indirectly suggest concentration or an appropriate break. Consequently, the system offers multiple actions, but the user retains room to choose among them. The necessity of a break is calculated from the user's degree of concentration and duration of study; it is used to control the ratio of agents taking a break and to avoid the feeling of coercion that results when a single agent presents only one action. In this paper, we examined how the ratio of behaviors in the agent group influences the user. As a result, a greater ratio of studying to resting agents in the group influenced the participants' behaviors.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115263864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
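The abstract says break necessity is computed from the user's concentration and study duration and then mapped to a ratio of resting agents, but gives no formula; a minimal sketch under assumed weightings (the 50/50 blend, the 90-minute cap, and the group size of 10 are all hypothetical) might look like:

```python
# Illustrative sketch: turn a user's break necessity into the fraction of
# agents shown taking a break. The actual formula is not in the abstract;
# the weights, the 90-minute cap, and the group size are assumptions.

def break_necessity(concentration, study_minutes, max_minutes=90.0):
    """Blend low concentration and long study duration into a 0..1 necessity."""
    fatigue = min(study_minutes / max_minutes, 1.0)
    return max(0.0, min(1.0, 0.5 * (1.0 - concentration) + 0.5 * fatigue))

def resting_agents(necessity, n_agents=10):
    """Allocate how many of the n_agents are shown resting rather than studying."""
    return round(necessity * n_agents)

print(resting_agents(break_necessity(0.2, 90)))  # 9
```

With nine of ten agents resting, the group indirectly suggests a break without any single agent issuing a command, which matches the coercion-avoidance idea in the abstract.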
Thomas Quitter, A. Mostafa, D'Arcy Norman, André Miede, E. Sharlin, Patrick Finn
We are interested in the interactive aspects of deploying humanoid robots as instructors for industrial assembly tasks. Training for industrial assembly requires workers to become familiar with all steps of the assembly process, including learning and reproducing new tasks, before they can be employed on a production line. The challenges in current practice are the limited availability of skilled instructors and the need for attention to individual workers' training needs. In this paper, we propose the use of humanoid robots to teach assembly tasks to workers while also providing a quality learning experience. We offer an assembly robotic instructor prototype based on a Baxter humanoid, and present the results of a study conducted with the prototype teaching the assembly of a simple gearbox.
{"title":"Humanoid Robot Instructors for Industrial Assembly Tasks","authors":"Thomas Quitter, A. Mostafa, D'Arcy Norman, André Miede, E. Sharlin, Patrick Finn","doi":"10.1145/3125739.3125760","DOIUrl":"https://doi.org/10.1145/3125739.3125760","url":null,"abstract":"We are interested in the interactive aspects of deploying humanoid robots as instructors for industrial assembly tasks. Training for industrial assembly requires workers to become familiar with all steps of the assembly process, including learning and reproducing new tasks, before they can be employed on a production line. The challenges in current practice are the limited availability of skilled instructors and the need for attention to individual workers' training needs. In this paper, we propose the use of humanoid robots to teach assembly tasks to workers while also providing a quality learning experience. We offer an assembly robotic instructor prototype based on a Baxter humanoid, and present the results of a study conducted with the prototype teaching the assembly of a simple gearbox.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115770694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linda Hirsch, Anton Björsell, Mikael Laaksoharju, M. Obaid
Most currently existing tools for cognitive memory therapy require physical interaction or at least the presence of another person. The goal of this paper is to investigate whether a social robot might be an acceptable solution for a more inclusive therapy for people with memory disorders and severe physical limitations. Applying a user-centered design approach, we conducted semi-structured interviews with five healthcare professionals (four medical doctors and a psychologist) in three iterations, followed by a focus group activity. An analysis of the collected data suggests several implications for design, with an emphasis on embodiment, social skills, interaction, and memory training exercises.
{"title":"Investigating Design Implications Towards a Social Robot as a Memory Trainer","authors":"Linda Hirsch, Anton Björsell, Mikael Laaksoharju, M. Obaid","doi":"10.1145/3125739.3125755","DOIUrl":"https://doi.org/10.1145/3125739.3125755","url":null,"abstract":"Most currently existing tools for cognitive memory therapy require physical interaction or at least the presence of another person. The goal of this paper is to investigate whether a social robot might be an acceptable solution for a more inclusive therapy for people with memory disorders and severe physical limitations. Applying a user-centered design approach, we conducted semi-structured interviews with five healthcare professionals (four medical doctors and a psychologist) in three iterations, followed by a focus group activity. An analysis of the collected data suggests several implications for design, with an emphasis on embodiment, social skills, interaction, and memory training exercises.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128447726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nourhan Elfaramawy, Pablo V. A. Barros, G. I. Parisi, S. Wermter
The recognition of emotions plays an important role in our daily life and is essential for social communication. Although multiple studies have shown that body expressions can strongly convey emotional states, emotion recognition from body motion patterns has received less attention than the use of facial expressions. In this paper, we propose a self-organizing neural architecture that can effectively recognize affective states from full-body motion patterns. To evaluate our system, we designed and collected a data corpus named the Body Expressions of Emotion (BEE) dataset using a depth sensor in a human-robot interaction scenario. For our recordings, nineteen participants were asked to perform six different emotions: anger, fear, happiness, neutral, sadness, and surprise. To compare our system with human performance, we conducted an additional experiment in which fifteen annotators labeled depth-map video sequences as one of the six emotion classes. The labeling results from the human annotators were compared to the results predicted by our system. Experimental results showed that the recognition accuracy of the system was competitive with human performance when exposed to body motion patterns from the same dataset.
{"title":"Emotion Recognition from Body Expressions with a Neural Network Architecture","authors":"Nourhan Elfaramawy, Pablo V. A. Barros, G. I. Parisi, S. Wermter","doi":"10.1145/3125739.3125772","DOIUrl":"https://doi.org/10.1145/3125739.3125772","url":null,"abstract":"The recognition of emotions plays an important role in our daily life and is essential for social communication. Although multiple studies have shown that body expressions can strongly convey emotional states, emotion recognition from body motion patterns has received less attention than the use of facial expressions. In this paper, we propose a self-organizing neural architecture that can effectively recognize affective states from full-body motion patterns. To evaluate our system, we designed and collected a data corpus named the Body Expressions of Emotion (BEE) dataset using a depth sensor in a human-robot interaction scenario. For our recordings, nineteen participants were asked to perform six different emotions: anger, fear, happiness, neutral, sadness, and surprise. To compare our system with human performance, we conducted an additional experiment in which fifteen annotators labeled depth-map video sequences as one of the six emotion classes. The labeling results from the human annotators were compared to the results predicted by our system. Experimental results showed that the recognition accuracy of the system was competitive with human performance when exposed to body motion patterns from the same dataset.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131765032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
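The comparison step in this evaluation amounts to scoring system predictions against annotator labels over the six named classes; a minimal sketch of that scoring (the sample sequences here are made up, not BEE data) could be:

```python
# Minimal sketch of comparing system predictions against human annotator
# labels over the six emotion classes named in the abstract. The example
# sequences are fabricated for illustration, not from the BEE dataset.

EMOTIONS = ["anger", "fear", "happiness", "neutral", "sadness", "surprise"]

def accuracy(predicted, labels):
    """Fraction of sequences where the system's class matches the annotators'."""
    assert len(predicted) == len(labels)
    assert all(p in EMOTIONS for p in predicted + labels)
    return sum(p == l for p, l in zip(predicted, labels)) / len(labels)

predicted = ["anger", "fear", "neutral", "sadness"]
labels    = ["anger", "fear", "happiness", "sadness"]
print(accuracy(predicted, labels))  # 0.75
```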
Communication, which begins when a sender signals communication needs and a receiver recognizes them, is one element in the construction of relationships with others. Expression through language is effective for initiating communication with others. However, such expressions may carry social risk, because language expressions are explicit signals that the receiver cannot easily ignore, regardless of his or her own communication needs. For example, the fatigue caused by social networking is one social risk that obstructs the construction of amicable relations. Instead, people often communicate through nonverbal expressions that provide vague information to reduce such risks. In this study, we analyze physical interaction based on mutual communication needs and build a model for establishing more sustainable relationships. In this experiment, we investigated the adjustment of communication needs required to build the model, testing a situation in which we coordinated communication needs using a mannequin. Participants approached the mannequin, went around it, and started communication from a position that showed more intimacy toward the partner when the partner's communication needs were low.
{"title":"Investigation of Approach to Others for Modeling of Physical Interaction by Communication Needs","authors":"Genta Yoshioka, Yugo Takeuchi","doi":"10.1145/3125739.3125769","DOIUrl":"https://doi.org/10.1145/3125739.3125769","url":null,"abstract":"Communication, which begins when a sender signals communication needs and a receiver recognizes them, is one element in the construction of relationships with others. Expression through language is effective for initiating communication with others. However, such expressions may carry social risk, because language expressions are explicit signals that the receiver cannot easily ignore, regardless of his or her own communication needs. For example, the fatigue caused by social networking is one social risk that obstructs the construction of amicable relations. Instead, people often communicate through nonverbal expressions that provide vague information to reduce such risks. In this study, we analyze physical interaction based on mutual communication needs and build a model for establishing more sustainable relationships. In this experiment, we investigated the adjustment of communication needs required to build the model, testing a situation in which we coordinated communication needs using a mannequin. Participants approached the mannequin, went around it, and started communication from a position that showed more intimacy toward the partner when the partner's communication needs were low.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127507377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In interpersonal interactions, humans speak in part by considering their social distance from and position with respect to other people, thereby developing relationships. In our research, we focus on positive politeness (PP), a strategy for actively reducing the distance between people in human communication through language. In addition, we propose an agent that attempts to actively interact with humans. First, we design a dialog system based on politeness theory. Next, we examine the effect of our proposed method on interactions. For our experiments, we implemented two agents: one performing the proposed PP method, and a conventional agent performing negative politeness based on unobtrusive behavior. We then compare and analyze experiment participants' impressions of the two agents. From our results, male participants accepted PP more frequently than female participants. Further, the proposed method lowered male participants' perceived sense of interacting with a machine.
{"title":"Improving Relationships Based on Positive Politeness Between Humans and Life-Like Agents","authors":"T. Miyamoto, D. Katagami, Y. Shigemitsu","doi":"10.1145/3125739.3132585","DOIUrl":"https://doi.org/10.1145/3125739.3132585","url":null,"abstract":"In interpersonal interactions, humans speak in part by considering their social distance from and position with respect to other people, thereby developing relationships. In our research, we focus on positive politeness (PP), a strategy for actively reducing the distance between people in human communication through language. In addition, we propose an agent that attempts to actively interact with humans. First, we design a dialog system based on politeness theory. Next, we examine the effect of our proposed method on interactions. For our experiments, we implemented two agents: one performing the proposed PP method, and a conventional agent performing negative politeness based on unobtrusive behavior. We then compare and analyze experiment participants' impressions of the two agents. From our results, male participants accepted PP more frequently than female participants. Further, the proposed method lowered male participants' perceived sense of interacting with a machine.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115479191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}