Designing a reconfigurable multimodal and collaborative supervisor for Virtual Environment
Pierre Martin, P. Bourdot
2011 IEEE Virtual Reality Conference, 2011-03-19. DOI: 10.1109/VR.2011.5759480
Virtual Reality (VR) systems cannot be promoted for complex applications (involving the interpretation of massive and intricate databases) without natural and "transparent" user interfaces: intuitive interfaces are required to enable non-expert users to adopt VR technologies. Many studies have been carried out on multimodal and collaborative systems in VR. Although these two aspects are usually studied separately, they share interesting similarities. Our work focuses on managing multimodal and collaborative interactions within a single process. We present here the similarities between these two processes and the main features of a reconfigurable multimodal and collaborative supervisor for Virtual Environments (VEs). The aim of such a system is to merge the pieces of information coming from VR devices (tracking, gestures, speech, haptics, etc.) in order to control immersive multi-user applications through the main communication and sensorimotor channels of humans. The architecture of this supervisor's framework is designed to be generic, modular and reconfigurable (via an XML configuration file), so that it can be applied to many different contexts.
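The abstract does not reproduce the configuration format itself. Purely as an illustration, a supervisor reconfigured through an XML file might be driven by something along these lines; every element, attribute, and identifier below is hypothetical and invented for this sketch, not taken from the paper's actual schema:

```xml
<!-- Hypothetical sketch only: element and attribute names are invented
     for illustration and do not reflect the authors' real configuration schema. -->
<supervisor>
  <devices>
    <device id="tracker1" type="tracking"/>
    <device id="mic1"     type="speech"/>
    <device id="glove1"   type="gesture"/>
  </devices>
  <fusion>
    <!-- e.g. merge a speech keyword and a pointing gesture
         that arrive within a 300 ms window -->
    <rule output="select-object" window-ms="300">
      <input device="mic1"   event="keyword:select"/>
      <input device="glove1" event="pointing"/>
    </rule>
  </fusion>
  <collaboration>
    <user id="u1" role="manipulator" devices="tracker1 glove1"/>
    <user id="u2" role="observer"    devices="mic1"/>
  </collaboration>
</supervisor>
```

The point of such a file, as the abstract suggests, is that the same supervisor binary could serve very different immersive applications by swapping out the device list, fusion rules, and user/role assignments rather than recompiling.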