DS-RT 2011 Tutorial: Telepresent Humans
D. Roberts, Norman Murray, C. Moore, Toby Duckworth
DOI: 10.1109/DS-RT.2011.39
2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications, 2011-10-20
Summary form only given; the complete presentation was not made available for publication as part of the conference proceedings. A grand challenge shared between computer science and communication technology is reproducing the face-to-face meeting across a distance. At present, we are some way from reproducing many of the semantics of a face-to-face meeting. Furthermore, while each medium can reproduce some of these semantics, no current medium can reproduce most of them. For example, while some media can show us what someone really looks like, and others what or who they are really looking at, communicating both together has not yet been achieved at reasonable quality across a reasonable distance. This tutorial begins by explaining some of the primary challenges in reproducing the face-to-face meeting, then shows how our research is examining both the problems and the solutions. We compare the approaches of "telepresent" video conferencing, immersive virtual environments, and 3D-video-based tele-immersion. A central theme is the communication of appearance and attention. We explain why video conferencing can faithfully reproduce only the first, why virtual reality can reproduce only the second, and how close free-viewpoint 3D video is coming to doing both. We look at tracking technologies for driving avatars, ranging from eye-trackers to the Kinect, and various ways of capturing people with multi-stream video and reproducing them in 3D video.