Stretch to the Client; Re-imagining Interfaces
Kay N. Wojtowicz, M. E. Cabrera
DOI: 10.1145/3568294.3580212
This paper presents the efforts made toward creating a client interface for the Hello Robot Stretch. The goal is an accessible interface that supports the best possible user experience. The interface enables users to control Stretch with basic commands through several modalities. To make it accessible, a simple and clear web interface was crafted so that users of differing abilities can successfully interact with Stretch; a voice-activated option was also added to further broaden the range of possible interactions.
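As an illustration of the kind of web command endpoint such an interface might expose, the sketch below maps basic movement commands to a robot-side dispatch call. The route, the command set, and the send_to_stretch helper are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a web endpoint for basic Stretch commands; the
# route, command table, and send_to_stretch() are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Basic commands of the kind the paper mentions, mapped to robot actions.
COMMANDS = {"forward", "backward", "turn_left", "turn_right", "stop"}

def send_to_stretch(command: str) -> None:
    """Placeholder for the robot-side call (e.g., via ROS or stretch_body)."""
    print(f"dispatching {command} to Stretch")

@app.route("/command", methods=["POST"])
def command():
    cmd = request.get_json(force=True).get("command", "")
    if cmd not in COMMANDS:
        return jsonify(error=f"unknown command: {cmd}"), 400
    send_to_stretch(cmd)
    return jsonify(status="ok", command=cmd)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

A voice modality could feed the same endpoint by converting recognized speech to one of the command strings, keeping the two input paths behind a single dispatch point.
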
Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care
Takuto Akiyoshi, H. Sumioka, Hirokazu Kumazaki, Junya Nakanishi, Hirokazu Kato, M. Shiomi
DOI: 10.1145/3568294.3580150
One of the important roles of social robots is to support mental health through conversations with people. In this study, we focused on the column method, a technique for supporting cognitive restructuring that is also used as one of the programs in psychiatric day care to help patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot's conversation content around the column method and implemented an autonomous conversation function. This paper reports on preliminary experiments conducted to evaluate and improve the prototype system in an actual psychiatric day care setting, along with comments from the experiment participants and day care staff.
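The column method structures reflection as a fixed sequence of prompts (situation, feeling, automatic thought, evidence, balanced thought). A minimal sketch of such a scripted flow follows; the exact prompts and column set are assumed for illustration and are not the robot's actual script.

```python
# Hypothetical column-method conversation flow; prompts and columns are
# illustrative assumptions, not the paper's dialogue content.
COLUMN_PROMPTS = [
    ("situation", "What happened? Describe the situation briefly."),
    ("feeling", "How did you feel, and how strong was that feeling?"),
    ("automatic_thought", "What went through your mind at that moment?"),
    ("evidence_for", "What makes that thought seem true?"),
    ("evidence_against", "Is there anything that doesn't fit that thought?"),
    ("balanced_thought", "Given both sides, is there another way to see it?"),
]

def run_column_session(ask):
    """Walk through the columns in order; `ask` performs one question-answer
    turn (e.g., speech synthesis plus speech recognition on the robot)."""
    record = {}
    for column, prompt in COLUMN_PROMPTS:
        record[column] = ask(prompt)
    return record

# Console stand-in for the robot's spoken dialogue:
# record = run_column_session(lambda prompt: input(prompt + " "))
```
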
Crowdsourcing Task Traces for Service Robotics
David J. Porfirio, Allison Sauppé, M. Cakmak, Aws Albarghouthi, Bilge Mutlu
DOI: 10.1145/3568294.3580112
Demonstration is an effective end-user development paradigm for teaching robots how to perform new tasks. In this paper, we posit that demonstration is useful not only as a teaching tool, but also as a way to understand and assist end-user developers in thinking about a task at hand. As a first step toward gaining this understanding, we constructed a lightweight web interface to crowdsource step-by-step instructions for common household tasks, leveraging the imaginations and past experiences of potential end-user developers. As evidence of the utility of our interface, we deployed it on Amazon Mechanical Turk and collected 207 task traces spanning 18 task categories. We describe our vision for how these task traces can be operationalized as task models within end-user development tools and provide a roadmap for future work.
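A crowdsourced trace of this kind is naturally represented as an ordered list of steps within a task category. The sketch below shows one plausible representation; the field names and the example trace are illustrative assumptions, not the paper's schema.

```python
# One plausible representation of a crowdsourced task trace; the fields
# and example values are assumptions, not the paper's data format.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str              # e.g., "fetch", "place", "wipe"
    target: str              # object or location the action applies to
    note: str = ""           # free-text detail from the contributor

@dataclass
class TaskTrace:
    category: str            # one of the task categories
    steps: list[Step] = field(default_factory=list)

trace = TaskTrace(
    category="set the table",
    steps=[
        Step("fetch", "plates", "from the cupboard"),
        Step("place", "plates", "one per seat"),
        Step("fetch", "cutlery"),
        Step("place", "cutlery", "fork left, knife right"),
    ],
)
```

Aggregating many such traces per category is one way the envisioned task models could be built, e.g., by mining common step orderings.
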
Mixed Reality-based Exergames for Upper Limb Robotic Rehabilitation
Nadia Vanessa Garcia Hernandez, S. Buccelli, M. Laffranchi, L. D. De Michieli
DOI: 10.1145/3568294.3580124
Robotic rehabilitation devices show strong potential for intensive, task-oriented, and personalized motor training. Integrating Mixed Reality (MR) technology and tangible objects into these systems allows the creation of attractive, stimulating, and personalized hybrid environments, and with a gamification approach, MR-based robotic training can increase patients' motivation, engagement, and experience. This paper presents the development of two Mixed Reality-based exergames for performing bimanual exercises assisted by a shoulder rehabilitation exoskeleton and using tangible objects. The system was designed through a user-centered iterative process, and it evaluates task performance and cost-function metrics derived from kinematic analysis of hand movements. A preliminary evaluation shows that the system operates correctly and stimulates the desired upper limb movements.
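The abstract does not specify which cost-function metrics are computed; two common choices for hand-trajectory analysis, path length and mean squared jerk, are sketched below as assumed examples.

```python
# Assumed examples of kinematic cost metrics over a hand trajectory;
# the specific metrics are common choices, not taken from the paper.
import numpy as np

def path_length(pos: np.ndarray) -> float:
    """Total distance travelled; pos is (T, 3) hand positions in metres."""
    return float(np.linalg.norm(np.diff(pos, axis=0), axis=1).sum())

def smoothness_cost(pos: np.ndarray, dt: float) -> float:
    """Mean squared jerk; lower values indicate smoother movement."""
    jerk = np.diff(pos, n=3, axis=0) / dt**3
    return float((jerk ** 2).sum(axis=1).mean())

# Example: a slightly noisy straight-line reach sampled at 100 Hz.
t = np.linspace(0, 1, 100)
pos = np.stack([t * 0.4, np.zeros_like(t), np.zeros_like(t)], axis=1)
pos += np.random.default_rng(0).normal(scale=1e-3, size=pos.shape)
print(path_length(pos), smoothness_cost(pos, dt=0.01))
```
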
Transfer Learning of Human Preferences for Proactive Robot Assistance in Assembly Tasks
Heramb Nemlekar, N. Dhanaraj, Angelos Guan, S. Gupta, S. Nikolaidis
DOI: 10.1145/3568162.3576965
We focus on enabling robots to proactively assist humans in assembly tasks by adapting to their preferred sequence of actions. Much work on robot adaptation requires human demonstrations of the task, yet demonstrations of real-world assemblies can be tedious and time-consuming. We therefore propose learning human preferences from demonstrations in a shorter, canonical task and using them to predict user actions in the actual assembly task. The proposed system uses the preference model learned from the canonical task as a prior and updates the model through interaction when its predictions are inaccurate. We evaluate the system in simulated assembly tasks and in a real-world human-robot assembly study, showing that both transferring the preference model from the canonical task and updating it online contribute to improved accuracy in human action prediction. This enables the robot to proactively assist users, significantly reduce their idle time, and improve their experience of working with the robot, compared to a reactive robot.
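One generic way to realize "prior from a canonical task, updated when predictions miss" is Bayesian filtering over candidate preference models, sketched below. This stand-in formulation is an assumption, not the paper's exact method.

```python
# Generic Bayesian update over candidate preference models; an assumed
# stand-in for the paper's transfer-and-update scheme.
import numpy as np

def update(weights: np.ndarray, predicted: np.ndarray, observed: int,
           eps: float = 0.1) -> np.ndarray:
    """weights[i] is the belief in preference model i; predicted[i] is the
    action model i predicted; observed is the action the human took."""
    likelihood = np.where(predicted == observed, 1.0 - eps, eps)
    posterior = weights * likelihood
    return posterior / posterior.sum()

# Prior learned in the canonical task, then two interaction updates:
beliefs = np.array([0.5, 0.3, 0.2])                    # three candidate models
beliefs = update(beliefs, np.array([2, 2, 0]), observed=2)
beliefs = update(beliefs, np.array([1, 0, 0]), observed=0)
print(beliefs)   # belief concentrates on model 1, consistent with both actions
```
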
A Multimodal Dataset for Robot Learning to Imitate Social Human-Human Interaction
Nguyen Tan Viet Tuyen, A. Georgescu, Irene Di Giulio, O. Çeliktutan
DOI: 10.1145/3568294.3580080
Humans tend to use various nonverbal signals to communicate their messages to their interaction partners. Previous studies utilised this channel as an essential clue for developing automatic approaches to understanding, modelling, and synthesizing individual behaviours in human-human and human-robot interaction settings. In small-group interactions, however, an essential aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI (Learning to Imitate Social Human-Human Interaction), a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities simultaneously captured by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed to be a benchmark for HRI and multimodal learning research on modelling intra- and interpersonal nonverbal signals in social interaction contexts and on investigating how to transfer such models to social robots.
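A typical first step when consuming such a dataset is aligning the differently-sampled modalities on a common clock. The nearest-timestamp resampling sketch below is an assumed illustration; the stream names and rates are not LISI-HHI's actual release format.

```python
# Assumed illustration of timestamp alignment across modalities; the
# streams and rates below are hypothetical, not the dataset's format.
import numpy as np

def nearest_sample(timestamps: np.ndarray, t: float) -> int:
    """Index of the sample whose timestamp is closest to time t."""
    return int(np.abs(timestamps - t).argmin())

# Suppose mocap runs at 120 Hz, gaze at 60 Hz, audio features at 100 Hz.
mocap_ts = np.arange(0, 10, 1 / 120)
gaze_ts = np.arange(0, 10, 1 / 60)
audio_ts = np.arange(0, 10, 1 / 100)

# Resample every modality onto a common 30 Hz clock for learning.
frames = [(nearest_sample(mocap_ts, t),
           nearest_sample(gaze_ts, t),
           nearest_sample(audio_ts, t))
          for t in np.arange(0, 10, 1 / 30)]
```
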
Coming In! Communicating Lane Change Intent in Autonomous Vehicles
Seong Hee Lee, Nicholas Britten, Avram Block, A. Pandya, Malte F. Jung, Paul Schmitt
DOI: 10.1145/3568294.3580113
Lane changes by autonomous vehicles (AVs) should not only succeed as maneuvers but also provide a positive interaction experience for other drivers. Because lane changes involve complex interactions, it is difficult to define a set of behaviors for communicating an AV's lane change intent. This study investigates different movements for communicating lane change intent in order to identify which ones communicate effectively and positively affect other drivers' decisions. We used a virtual reality environment in which 14 participants were each placed in the driver's seat of a car and experienced four different AV lane change signals. Our findings suggest that expressive lane change behaviors such as lateral movement are highly legible at the cost of high perceived aggressiveness. We propose further investigation into how tuning key parameters of lateral movement can balance legibility against aggressiveness to provide the best AV interaction experience for human drivers.
The Influence of a Robot Recommender System on Impulse Buying Tendency
Ching-Chih Tsao, Cheng-Yi Tang, Yu-Wen Chang, Y. Sung, S. Chien, Szu-Yin Lin
DOI: 10.1145/3568294.3580171
This study examines the influence of a robot recommender system on human impulse buying tendency in online e-commerce contexts. We conducted an empirical user study in which different marketing strategies (limited quantity vs. discount rate) were applied to the products and intimate designs were applied to the robotic agent. An electroencephalogram (EEG) headset captured users' brain activity, allowing us to investigate participants' real-time cognitive perceptions of the different experimental conditions (i.e., marketing plans and robotic agents). Our preliminary results reveal that marketing strategies and robot recommender applications can trigger impulsive buying behavior and give rise to different cognitive activities.
More Than a Number: A Multi-dimensional Framework For Automatically Assessing Human Teleoperation Skill
E. Jensen, Bradley Hayes, S. Sankaranarayanan
DOI: 10.1145/3568294.3580167
We present a framework for systematically and formally evaluating human teleoperator skill, aiming to quantify how skillful a particular operator is at a well-defined task. Our proposed framework has two parts. First, the tasks used to evaluate skill are decomposed into a series of domain-specific primitives, each with a formal specification in signal temporal logic. Second, skill is automatically evaluated along multiple dimensions rather than as a single number; these dimensions include robustness, efficiency, resilience, and readiness for each primitive task. We provide an initial evaluation on the tasks of taking off, hovering, and landing in a drone simulator. This preliminary evaluation shows the value of a multi-dimensional assessment of human operator performance.
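The abstract does not give its specifications, but as an example of the approach, a hovering primitive could be specified in signal temporal logic as φ = G_[0,T](|z − z_ref| < ε), whose robustness over a discrete trace is the worst-case margin. The sketch below computes this for an assumed trace; both the formula and the trace are illustrative, not the paper's.

```python
# Illustrative STL-style hovering specification and its robustness on a
# trace; phi = G_[0,T](|z - z_ref| < eps) is an assumed example primitive.
import numpy as np

def hover_robustness(z: np.ndarray, z_ref: float, eps: float) -> float:
    """Robustness of "always stay within eps of z_ref" over the trace:
    positive means satisfied with margin, negative means violated."""
    return float((eps - np.abs(z - z_ref)).min())

# A trace that drifts slightly around a 1.5 m hover setpoint:
z = 1.5 + 0.03 * np.sin(np.linspace(0, 6 * np.pi, 300))
print(hover_robustness(z, z_ref=1.5, eps=0.10))   # ~0.07: satisfied with margin
```

Per-primitive robustness values of this kind can then be aggregated per dimension rather than collapsed into one score, matching the framework's multi-dimensional intent.
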
Transparent Value Alignment
Lindsay M. Sanneman, J. Shah
DOI: 10.1145/3568294.3580147
As robots become increasingly prevalent in our communities, aligning the values motivating their behavior with human values is critical. However, it is often difficult or impossible for humans, both expert and non-expert, to enumerate values comprehensively, accurately, and in forms that are readily usable for robot planning, and misspecification can lead to undesired, inefficient, or even dangerous behavior. In the value alignment problem, humans and robots work together to optimize human objectives, which are often represented as reward functions that the robot can infer by observing human actions. Existing alignment approaches provide the human no explicit feedback about this inference process. In this paper, we introduce an exploratory framework to address this problem, which we call Transparent Value Alignment (TVA). TVA proposes that techniques from explainable AI (XAI) be explicitly applied to give humans information about the robot's beliefs throughout learning, enabling efficient and effective human feedback.
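The inference loop TVA builds on can be sketched as a belief over candidate reward functions, updated from observed human actions and then, in the TVA step, surfaced to the human. The Boltzmann-rationality likelihood and the toy numbers below are standard assumptions for illustration, not details from the paper.

```python
# Assumed sketch of reward inference from human actions, with the robot's
# belief surfaced to the human (the TVA step); likelihood model and
# numbers are illustrative.
import numpy as np

def boltzmann_likelihood(q_values: np.ndarray, action: int,
                         beta: float = 5.0) -> float:
    """P(action | reward) when the human picks actions ~ exp(beta * Q)."""
    p = np.exp(beta * (q_values - q_values.max()))
    return float(p[action] / p.sum())

# Belief over three candidate reward hypotheses and their Q-values for the
# two actions available in some state (toy numbers).
beliefs = np.array([1 / 3, 1 / 3, 1 / 3])
q_per_hypothesis = np.array([[1.0, 0.0],    # hypothesis 0 prefers action 0
                             [0.0, 1.0],    # hypothesis 1 prefers action 1
                             [0.5, 0.5]])   # hypothesis 2 is indifferent

observed_action = 1
likelihoods = np.array([boltzmann_likelihood(q, observed_action)
                        for q in q_per_hypothesis])
beliefs = beliefs * likelihoods
beliefs /= beliefs.sum()

# The transparency step: report what the robot now believes to the human.
print(f"robot belief after observing action {observed_action}: {beliefs.round(2)}")
```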