Effect of attentive listening robot on pleasure and arousal change in psychiatric daycare
Keiko Ochi, Koji Inoue, Divesh Lala, Tatsuya Kawahara, Hirokazu Kumazaki
Advanced Robotics, published online 2023-10-04. DOI: 10.1080/01691864.2023.2257264
Abstract
In this paper, we investigate the usefulness of an attentive-listening robot in psychiatric daycare, an outpatient treatment program for the rehabilitation of psychiatric disorders. The robot was developed based on counseling techniques such as repeating words that the user has said. It can also generate backchannels as a listening behavior during user utterances. Conversation experiments were conducted to evaluate whether the robot can provide effective activities in this setting. The robot attentively listened to 18 daycare attendees talking about their recent memorable events for up to 3 min. The results showed that the conversation increased self-rated arousal. Impressions of the robot indicated that it was easier to converse with than a stranger and more useful as a talking partner than a friend. The subjects also responded positively when asked whether they would keep the robot in their homes. A linear regression analysis indicates that the frequency of the robot's assessment responses and backchannels positively affects the improvement in pleasure. These findings may pave the way for utilizing this kind of robot, which people can talk to easily without hesitation or excessive consideration.

Keywords: communicative robot; attentive listening; mental health; psychiatric daycare; speech analysis

Acknowledgments
We thank Dr. Yosuke Maeda and Ms. Sawa Maeda for their supervision of the experiments. We also thank Ms. Yumi Onishi, Ms. Reina Ōki, and Mr. Hiroshi Notsu for their support of the experiments and for insightful discussions based on their knowledge and experience in psychiatric daycare.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This study was supported by JSPS KAKENHI (19H05691) and JST Moonshot R&D Grant Number JPMJMS2011.

Notes on contributors
Keiko Ochi received her PhD from the Graduate School of Information Science and Technology, the University of Tokyo, Japan. Her research interests include assistive technology and speech signal processing.
Koji Inoue received his MS and PhD degrees in informatics from Kyoto University, Japan, in 2015 and 2018. He is currently an assistant professor at the Graduate School of Informatics, Kyoto University, and was a research fellow of the Japan Society for the Promotion of Science (JSPS) from 2015 to 2018. His research interests include spoken dialogue systems, speech signal processing, multimodal interaction, and conversational robots. He is a member of IEEE and ACM.
Divesh Lala received his PhD from the Graduate School of Informatics, Kyoto University, Kyoto, Japan, in 2015. He is currently a researcher at the Graduate School of Informatics, Kyoto University. His research interests include human–robot interaction and multimodal signal processing.
Tatsuya Kawahara received his BE in 1987, ME in 1989, and PhD in 1995, all in information science, from Kyoto University, Kyoto, Japan. From 1995 to 1996, he was a visiting researcher at Bell Laboratories, Murray Hill, NJ, USA. He is currently a professor at the School of Informatics, Kyoto University, where he served as Dean from 2020 to 2023; before that, he was also an invited researcher at ATR and NICT. He has published more than 450 academic papers on automatic speech recognition, spoken language processing, and spoken dialogue systems. He has led several projects, including the open-source speech recognition software Julius, the automatic transcription system deployed in the Japanese Parliament (Diet), and the autonomous android ERICA. Dr. Kawahara is President of APSIPA, Secretary General of ISCA, and a Fellow of IEEE.
Hirokazu Kumazaki received his degrees in medicine from Keio University, Japan, in 2015. He is currently a professor at the School of Medicine, Nagasaki University. His research interests include psychiatry, developmental disorders, AI, sensory symptoms, and communication robots. He is a member of INSAR.
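The abstract does not specify the details of the regression model. As a purely illustrative sketch of the kind of analysis it mentions, the change in self-rated pleasure could be regressed on per-session counts of the robot's backchannels and assessment responses; the variable names, placeholder data, and use of ordinary least squares below are assumptions for illustration, not the authors' actual procedure or data.

# Illustrative sketch only: regress change in self-rated pleasure on the
# frequency of the robot's backchannels and assessment responses.
# All values below are placeholders, not figures from the study.
import numpy as np

# One row per session: [backchannel count, assessment-response count]
features = np.array([
    [12, 3],
    [20, 5],
    [8, 1],
    [15, 4],
], dtype=float)

# Change in self-rated pleasure (post minus pre) for each session
pleasure_change = np.array([0.5, 1.0, -0.2, 0.7])

# Ordinary least squares with an intercept term
X = np.column_stack([np.ones(len(features)), features])
coef, residuals, rank, _ = np.linalg.lstsq(X, pleasure_change, rcond=None)

print("intercept:", coef[0])
print("backchannel coefficient:", coef[1])
print("assessment-response coefficient:", coef[2])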
Journal introduction:
Advanced Robotics (AR) is the international journal of the Robotics Society of Japan and has a history of more than twenty years. It is an interdisciplinary journal that publishes research on all aspects of robotics science and technology. Advanced Robotics publishes original research papers and survey papers from all over the world. Issues contain papers on the analysis, theory, design, development, implementation, and use of robots and robot technology. The journal covers both fundamental robotics and robotics related to applied fields such as service robotics, field robotics, medical robotics, rescue robotics, space robotics, underwater robotics, agricultural robotics, industrial robotics, and robots in emerging fields. It also covers aspects of social and managerial analysis and policy regarding robots.
Advanced Robotics (AR) is an international, ranked, peer-reviewed journal which publishes original research contributions to scientific knowledge.
All manuscript submissions are subject to initial appraisal by the Editor, and, if found suitable for further consideration, to peer review by independent, anonymous expert referees.