
2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): Latest Publications

Inferring affective states from observation of a robot's simple movements
Genta Yoshioka, Takafumi Sakamoto, Yugo Takeuchi
This paper reports an analytic finding that humans inferred the emotional states of a simple, flat robot, one that only moves autonomously on a floor in all directions, based on Russell's circumplex model of affect as it depends on the human's spatial position. We observed the physical interaction between humans and the robot in an experiment in which participants sought a treasure in a given field while the robot expressed its affective state through movement; the robot revealed its internal state only through these simple movements. This result will contribute to the basic design of HRI.
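The abstract's key idea is that movements map onto Russell's circumplex of affect, a two-dimensional plane of valence and arousal. The sketch below is purely illustrative (the quadrant labels and the mapping are assumptions, not the authors' implementation): it shows how a point on that plane can be read back as a coarse affect label.

```python
# Illustrative sketch of Russell's circumplex model of affect (not the
# paper's implementation): affective states lie on a plane with valence
# on the x-axis and arousal on the y-axis; each quadrant gets a coarse
# label. A robot's movement parameters could be projected onto this
# plane and interpreted as an emotion.
import math


def circumplex_label(valence: float, arousal: float) -> str:
    """Return a coarse affect label for a point on the circumplex plane."""
    if valence >= 0 and arousal >= 0:
        return "excited"      # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "distressed"   # negative valence, high arousal
    if valence < 0:
        return "depressed"    # negative valence, low arousal
    return "relaxed"          # positive valence, low arousal


def angle_to_affect(theta_deg: float) -> str:
    """Map an angle on the circumplex circle to its quadrant label."""
    theta = math.radians(theta_deg)
    return circumplex_label(math.cos(theta), math.sin(theta))
```

A movement-to-affect pipeline would then only need to estimate a (valence, arousal) pair from motion features such as speed and approach direction.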
DOI: 10.1109/ROMAN.2015.7333582
Citations: 6
A case study of an automatic volume control interface for a telepresence system
Masaaki Takahashi, Masa Ogata, M. Imai, Keisuke Nakamura, K. Nakadai
The study of the telepresence robot as a tool for telecommunication from a remote location is attracting considerable attention. However, a telepresence robot system typically does not allow the volume of the user's utterance to be adjusted precisely, because it does not account for varying conditions in the sound environment, such as noise. In addition, when talking with several people in a remote location, the user would like to be able to change the speaker volume freely according to the situation. A previous study proposed a telepresence robot with a function that automatically regulates the volume of the user's utterance; however, the manner in which users exploit this function in practical situations still needs to be investigated. We propose a telepresence conversation robot system called "TeleCoBot." TeleCoBot includes an operator's user interface through which the volume of the user's utterance is automatically regulated according to the distance between the robot and the conversation partner and the noise level in the robot's environment. We conducted a case study in which participants played a game using TeleCoBot's interface. The results reveal how the participants used TeleCoBot and which additional factors the system requires.
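The abstract describes regulating output volume from two signals: distance to the conversation partner and ambient noise. The gain law below is a hedged sketch of that idea only; the reference values and the multiplicative form are assumptions, not the published TeleCoBot controller.

```python
# Hedged sketch of distance- and noise-driven volume regulation (the
# gain law is illustrative, not TeleCoBot's actual algorithm): volume
# grows with distance to the partner and with ambient noise above a
# reference level, then is clamped to a safe normalized range.
def regulated_volume(base_volume: float,
                     distance_m: float,
                     noise_db: float,
                     ref_distance_m: float = 1.0,
                     ref_noise_db: float = 40.0) -> float:
    """Return a normalized speaker volume in [0, 1]."""
    # Boost only when farther / noisier than the reference conditions.
    distance_gain = max(distance_m / ref_distance_m, 1.0)
    noise_gain = max(noise_db / ref_noise_db, 1.0)
    volume = base_volume * distance_gain * noise_gain
    return min(max(volume, 0.0), 1.0)  # clamp to [0, 1]
```

Under this toy law, a partner 2 m away in 60 dB of noise triples the 1 m / 40 dB baseline gain, which matches the abstract's intuition that both cues should raise the utterance volume.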
DOI: 10.1109/ROMAN.2015.7333605
Citations: 6
Investigating the effects of robot behavior and attitude towards technology on social human-robot interactions
V. Nitsch, Thomas Glassen
Many envision a future in which personal service robots share our homes and take part in our daily lives. These robots should possess a certain “social intelligence”, so that people are willing, if not eager, to interact with them. In this endeavor, applied psychologists and roboticists have conducted numerous studies to identify the factors that affect social interactions between humans and robots, both positively and negatively. In order to ascertain the extent to which the social human-robot interaction might be influenced by robot behavior and a person's attitude towards technology, an experiment was conducted using the UG paradigm, in which participants (N=48) interacted with a robot, which displayed either animated or apathetic behavior. The results suggest that although the interaction with a robot displaying animated behavior is overall rated more favorably, people may nevertheless act differently towards such robots, depending on their perceived technological competence and their enthusiasm for technology.
DOI: 10.1109/ROMAN.2015.7333560
Citations: 20
Sequential intention estimation of a mobility aid user for intelligent navigational assistance
Takamitsu Matsubara, J. V. Miró, Daisuke Tanaka, James Poon, Kenji Sugimoto
This paper proposes an intelligent mobility aid framework aimed at mitigating the impact of cognitive and/or physical user deficiencies by providing suitable mobility assistance with minimum interference. To this end, a user action model based on Gaussian Process Regression (GPR) is proposed to encapsulate the probabilistic and nonlinear relationships among user action, state of the environment, and user intention. Moreover, exploiting the analytical tractability of the predictive distribution allows a sequential Bayesian process for user intention estimation to take place. The proposed scheme is validated on data obtained in an indoor setting with an instrumented robotic wheelchair augmented with sensorial feedback from the environment and user commands, as well as proprioceptive information from the actual vehicle, achieving near-real-time accuracy of ~80%. The initial results are promising and indicate that the process is suitable for inferring user driving behaviors in the context of ambulatory robots designed to assist users with mobility impairments in their regular daily activities.
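The sequential Bayesian step the abstract refers to can be sketched generically: a likelihood model (in the paper, a GPR predictive distribution; here an arbitrary callable, as an assumption) scores the observed action under each candidate intention, and the posterior is renormalized at every time step.

```python
# Minimal sketch of sequential Bayesian intention estimation (assumed
# form, not the paper's exact model): at each step the posterior over a
# discrete set of intentions is updated as
#   posterior(i) = likelihood(i) * prior(i), renormalized over i.
# In the paper the likelihood p(action | environment, intention) comes
# from a GPR predictive distribution; here it is any callable.
from typing import Callable, Dict


def bayes_update(prior: Dict[str, float],
                 likelihood: Callable[[str], float]) -> Dict[str, float]:
    """One sequential Bayes step over a discrete intention set."""
    unnorm = {i: likelihood(i) * p for i, p in prior.items()}
    z = sum(unnorm.values())  # normalizing constant
    return {i: v / z for i, v in unnorm.items()}


# Toy usage: two candidate intentions, one observation favoring "left".
posterior = bayes_update({"left": 0.5, "right": 0.5},
                         lambda i: {"left": 0.8, "right": 0.2}[i])
```

Feeding each new observation's likelihood back through `bayes_update` with the previous posterior as the prior gives the sequential estimate described in the abstract.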
DOI: 10.1109/ROMAN.2015.7333580
Citations: 14
Talking-Ally: What is the future of robot's utterance generation?
Hitomi Matsushita, Yohei Kurata, P. R. D. De Silva, M. Okada
It is still an enormous challenge within the HRI community to make a significant contribution to the development of a robot's utterance generation mechanism. How does one actually go about contributing to, and predicting the future of, robot utterance generation? This motivates us to propose a robot utterance generation approach that utilizes both addressivity and hearership. The novel Talking-Ally platform is capable of producing an utterance (toward addressivity) by utilizing the state of the hearer's behaviors (eye-gaze information) to persuade the user (states of hearership) through dynamic interaction. Moreover, the robot can manipulate modality, turn-initial, and entrust behaviors to increase the liveliness of conversations, which is facilitated by shifting the direction of the conversation and maintaining the hearer's engagement in it. Our experiment focuses on evaluating how interactive users engage with the utterance generation approach (performance) and the persuasive power of the robot's communication within dynamic interactions.
DOI: 10.1109/ROMAN.2015.7333603
Citations: 3
Robot watchfulness hinders learning performance
Jonathan S. Herberg, S. Feller, Ilker Yengin, Martin Saerbeck
Educational technological applications, such as computerized learning environments and robot tutors, are often programmed to provide social cues for the purposes of facilitating natural interaction and enhancing productive outcomes. However, social interactions can carry potential costs that run counter to such goals. Here, we present an experiment testing the impact of a watchful versus non-watchful robot tutor on children's language-learning effort and performance. Across two interaction sessions, children learned French and Latin rules from a robot tutor and filled in worksheets applying the rules to translate phrases. Results indicate better performance on the worksheets in the session in which the robot looked away from the child while the child was filling in the worksheets, as compared to the session in which it looked toward the child. This was the case in particular for the more difficult worksheet items. These findings highlight the need for careful implementation of social robot behaviors to avoid counterproductive effects.
DOI: 10.1109/ROMAN.2015.7333620
Citations: 23
Effects of interaction and appearance on subjective impression of robots
Keisuke Nonomura, K. Terada, A. Ito, S. Yamada
Human-interactive robots are assessed according to various factors, such as behavior, appearance, and quality of interaction. In the present study, we investigated the hypothesis that impressions of an unattractive robot will be improved by emotional interaction involving physical touch with the robot. An experiment with human subjects confirmed that evaluations of the intimacy factor of unattractive robots improved after two minutes of physical and emotional interaction with such robots.
DOI: 10.1109/ROMAN.2015.7333577
Citations: 0
Conscious/unconscious emotional dialogues in typical children in the presence of an InterActor Robot
I. Giannopulu, Tomio Watanabe
In the present interdisciplinary study, we have combined cognitive neuroscience, psychiatry, and engineering knowledge with the aim of analyzing emotion, language, and un/consciousness in children aged 6 (n=20) and 9 (n=20) years via listener-speaker communication. The speaker was always a child; the listener was a Human InterActor or a Robot InterActor, i.e., a small robot that reacts to speech expression by nodding only. Unconscious nonverbal emotional expression associated with physiological data (heart rate) as well as conscious processes related to behavioral data (number of nouns and verbs, in addition to reported feelings) were considered. The results showed that 1) the heart rate was higher for children aged 6 years than for children aged 9 years when the InterActor was the robot; 2) the number of words (nouns and verbs) expressed by both age groups was higher when the InterActor was a human, and it was lower for the children aged 6 years than for the children aged 9 years. Even if a difference of consciousness exists between the two groups, everything happens as if the InterActor Robot allows children to elaborate a multivariate equation, encoding and conceptualizing within their brain and externalizing into unconscious nonverbal emotional behavior, i.e., automatic activity. The Human InterActor allows children to externalize the elaborated equation into conscious verbal behavior (words), i.e., controlled activity. Unconscious and conscious processes would depend not only on natural environments but also on artificial environments such as robots.
DOI: 10.1109/ROMAN.2015.7333575
Citations: 6
Constraints on freely chosen action for moral robots: Consciousness and control
P. Bello, John Licato, S. Bringsjord
The protean word `autonomous' has gained broad currency as a descriptive adjective for AI research projects, robotic and otherwise. Depending upon context, `autonomous' at present connotes anything from a shallow, purely reactive system to a sophisticated cognitive architecture reflective of much of human cognition; hence the term fails to pick out any specific set of constitutive functionality. However, philosophers and ethicists have something relatively well-defined in mind when they talk about the idea of autonomy. For them, an autonomous agent is often by definition potentially morally responsible for its actions. Moreover, as a prerequisite to correct ascription of `autonomous,' a certain capacity to choose freely is assumed - even if this freedom is understood to be semi-constrained by societal conventions, moral norms, and the like.
DOI: 10.1109/ROMAN.2015.7333654
Citations: 6
A novel 4 DOF eye-camera positioning system for Androids
Edgar Flores, S. Fels
We present a novel eye-camera positioning system with four degrees of freedom (DOF). The system has been designed to emulate human eye movements, including saccades, for anatomically accurate androids. The architecture of our system is similar to that of a universal joint in that a hollowed sphere (the eyeball), hosting a miniature CMOS color camera, takes the part of the cross shaft that connects a pair of hinges oriented at 90 degrees to each other. This concept allows the motors to remain static, enabling them to be placed in multiple configurations during the mechanical design stage and facilitating the inclusion of other robotic parts in the robot's head. Based on our evaluations, the robotic eye-camera has been shown to be suitable for perception experiments that require human-like eye motion.
DOI: 10.1109/ROMAN.2015.7333608
Citations: 3