{"title":"与虚拟人进行全身交互的具身设计","authors":"M. Gillies, Harry Brenton, A. Kleinsmith","doi":"10.1145/2790994.2790996","DOIUrl":null,"url":null,"abstract":"This paper presents a system that allows end users to design full body interactions with 3D animated virtual character through a process we call Interactive Performance Capture. This process is embodied in the sense that users design directly by moving and interacting using an interactive machine learning method. Two people improvise an interaction based only on their movements, one plays the part of the virtual character the other plays a real person. Their movements are recorded and they label it with metadata that identifies certain actions and responses. This labeled data is then used to train a Gaussian Mixture Model that is able to recognize new actions and generate suitable responses from the virtual character. A small study showed that users do indeed design in a very embodied way using movement directly as a means of thinking through and designing interactions.","PeriodicalId":272811,"journal":{"name":"Proceedings of the 2nd International Workshop on Movement and Computing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Embodied design of full bodied interaction with virtual humans\",\"authors\":\"M. Gillies, Harry Brenton, A. Kleinsmith\",\"doi\":\"10.1145/2790994.2790996\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a system that allows end users to design full body interactions with 3D animated virtual character through a process we call Interactive Performance Capture. This process is embodied in the sense that users design directly by moving and interacting using an interactive machine learning method. Two people improvise an interaction based only on their movements, one plays the part of the virtual character the other plays a real person. Their movements are recorded and they label it with metadata that identifies certain actions and responses. This labeled data is then used to train a Gaussian Mixture Model that is able to recognize new actions and generate suitable responses from the virtual character. 
A small study showed that users do indeed design in a very embodied way using movement directly as a means of thinking through and designing interactions.\",\"PeriodicalId\":272811,\"journal\":{\"name\":\"Proceedings of the 2nd International Workshop on Movement and Computing\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-08-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2nd International Workshop on Movement and Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2790994.2790996\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd International Workshop on Movement and Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2790994.2790996","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Embodied design of full bodied interaction with virtual humans
This paper presents a system that allows end users to design full-body interactions with a 3D animated virtual character through a process we call Interactive Performance Capture. The process is embodied in the sense that users design directly by moving and interacting, supported by an interactive machine learning method. Two people improvise an interaction based only on their movements: one plays the part of the virtual character, the other plays a real person. Their movements are recorded and labeled with metadata that identifies particular actions and responses. This labeled data is then used to train a Gaussian Mixture Model that can recognize new actions and generate suitable responses from the virtual character. A small study showed that users do indeed design in a very embodied way, using movement directly as a means of thinking through and designing interactions.
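The abstract does not give implementation details, but the recognition step it describes (labeled movement data used to train a Gaussian Mixture Model that classifies new actions and triggers character responses) can be sketched roughly as below. This is a minimal sketch using scikit-learn, not the authors' code: the feature representation, the per-action-GMM formulation, the function names, and the response lookup are illustrative assumptions.

```python
# Hypothetical sketch: one GMM per labeled action, used to recognize a user's
# movement window and pick a virtual-character response. Feature extraction,
# action labels, and the response mapping are assumptions for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture


def train_action_models(labeled_clips, n_components=3):
    """Fit one GMM per action label.

    labeled_clips: dict mapping an action label (e.g. "wave") to an
    (n_frames, n_features) array of pose features recorded during the
    improvised performance.
    """
    models = {}
    for label, features in labeled_clips.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="full")
        gmm.fit(features)
        models[label] = gmm
    return models


def recognise_action(models, window):
    """Return the action whose GMM assigns the new movement window the
    highest mean log-likelihood."""
    scores = {label: gmm.score_samples(window).mean()
              for label, gmm in models.items()}
    return max(scores, key=scores.get)


# Illustrative lookup: recognized user action -> character response animation.
RESPONSES = {"wave": "wave_back", "approach": "step_forward", "idle": "look_around"}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 200 frames of 10-D pose features per labeled action.
    clips = {label: rng.normal(loc=i, size=(200, 10))
             for i, label in enumerate(RESPONSES)}
    models = train_action_models(clips)
    new_window = rng.normal(loc=0.0, size=(30, 10))  # 30 frames of live input
    action = recognise_action(models, new_window)
    print(action, "->", RESPONSES[action])
```

Comparing per-class log-likelihoods is one common way to use GMMs for recognition; the paper may couple recognition and response generation differently than this separate lookup table suggests.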