H. Otaka, Ai Nieda, Naruhito Toyoda, Megumi Tasaki, Ryo Takatama, D. Kuwabara, Masashi Sakamoto
Facial color and texture shape the impressions of facial appearance and attractiveness (e.g., gorgeous, sophisticated, and warm-hearted). These impressions can be affected by facial makeup, including foundation, lip-makeup, eye-makeup, eyebrow-makeup, and cheek-makeup. Foundation changes facial skin texture and adjusts facial skin tone. Lip-makeup changes lip color and texture. However, it is difficult to characterize makeup impressions clearly with questionnaires, because the meaning of the language used in a questionnaire depends on the customer's culture, lifestyle, or country. In addition, a questionnaire cannot measure elements such as color, radiance, and shape, even though these elements influence makeup preference. Therefore, in our previous study, we developed an eyelash makeup design system using computer graphics for quantitative interpretation of makeup impressions. However, it is not well understood which types of color and texture in specific facial parts correspond to each impression of facial attractiveness. We aim to understand these correspondences and to manipulate facial impressions as desired through makeup. In the present study, using MAYA, we first create a CG image of an average face shape as an original image. We then manipulate the original image to create nine images with various combinations of makeup, including foundation, lip-makeup, eye-makeup, eyebrow-makeup, and cheek-makeup; each of the nine images is intended to convey one specific impression. We evaluate whether the actual visual impressions these images make on people correspond to our intended impressions of attractiveness.
{"title":"CG aided makeup design to understand and manipulate the impression of facial look and attractiveness","authors":"H. Otaka, Ai Nieda, Naruhito Toyoda, Megumi Tasaki, Ryo Takatama, D. Kuwabara, Masashi Sakamoto","doi":"10.1145/2804408.2814181","DOIUrl":"https://doi.org/10.1145/2804408.2814181","url":null,"abstract":"Facial color and texture make the impressions of facial look and attractiveness (e.g. gorgeous, sophisticated and warm-hearted). These impressions can be affected by facial makeups, including face foundation, lip-makeup, eye-makeup, eyebrow-makeup, and cheek-makeup. Face Foundation changes facial skin textures and adjusts facial skin tones. Lip-makeup changes lip colors and textures. However, it is difficult to figure out the detail of makeup impression clearly, because the meaning of language using in the questionnaire depends on the customer's culture, lifestyle, or country. In addition, the questionnaire cannot measure the elements such as color, radiance and the shapes though these elements have an influence on makeup preference. Therefore, in our previous study, we developed the eyelash makeup design system by using computer graphics for quantitative interpretation of the makeup impression. However, it is not well understood which types of color and texture in specific face parts correspond to each impression of face attractiveness. We aim to understand the corresponding facial impressions and manipulate them as you like, by makeup. .In the present study, using MAYA, we first create a CG image of average face shape as an original image. We next manipulate the original image to create 9 images with various combinations of makeups, including foundation, lip-makeup, eye-makeup, eyebrow, and cheek; each of 9 images is intended to make one specific impression. We evaluate whether these images' actual visual impressions on people correspond to our intended impressions of attractiveness.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128231227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sarah H. Creem-Regehr, Jeanine K. Stefanucci, W. Thompson, N. Nash, Michael McCardell
Perceiving an accurate sense of absolute scale is important for the utility of virtual environments (VEs). Research shows that absolute egocentric distances are underestimated in VEs compared to the same judgments made in the real world, but there are inconsistencies in the amount of underestimation. We examined two possible factors in the variation in the magnitude of distance underestimation. We compared egocentric distance judgments in a high-cost (NVIS SX60) and low-cost (Oculus Rift DK2) HMD using both indoor and outdoor highly-realistic virtual models. Performance more accurately matched the intended distance in the Oculus compared to the NVIS, and regardless of the HMD, distances were underestimated more in the outdoor versus the indoor VE. These results suggest promise in future use of consumer-level wide field-of-view HMDs for space perception research and applications, and the importance of considering the context of the environment as a factor in the perception of absolute scale within VEs.
{"title":"Egocentric distance perception in the Oculus Rift (DK2)","authors":"Sarah H. Creem-Regehr, Jeanine K. Stefanucci, W. Thompson, N. Nash, Michael McCardell","doi":"10.1145/2804408.2804422","DOIUrl":"https://doi.org/10.1145/2804408.2804422","url":null,"abstract":"Perceiving an accurate sense of absolute scale is important for the utility of virtual environments (VEs). Research shows that absolute egocentric distances are underestimated in VEs compared to the same judgments made in the real world, but there are inconsistencies in the amount of underestimation. We examined two possible factors in the variation in the magnitude of distance underestimation. We compared egocentric distance judgments in a high-cost (NVIS SX60) and low-cost (Oculus Rift DK2) HMD using both indoor and outdoor highly-realistic virtual models. Performance more accurately matched the intended distance in the Oculus compared to the NVIS, and regardless of the HMD, distances were underestimated more in the outdoor versus the indoor VE. These results suggest promise in future use of consumer-level wide field-of-view HMDs for space perception research and applications, and the importance of considering the context of the environment as a factor in the perception of absolute scale within VEs.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129379316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite the elusiveness of the term "Uncanny Valley", research on making appealing virtual humans that approach realism continues. The theory suggests that characters lose appeal as they approach photorealism (e.g., [MacDorman et al. 2009]). Realistic virtual characters are judged harshly because the human visual system has acquired more expertise with the featural constraints of other humans than with those of artificial characters [Seyama and Nagayama 2007]. Stylisation (making the character's appearance abstract) is therefore often used to prevent virtual characters from being perceived as unpleasant. We designed an experiment to test whether there is a general affinity towards abstract as opposed to realistic characters.
{"title":"Evaluating the Uncanny valley with the implicit association test","authors":"Katja Zibrek, R. Mcdonnell","doi":"10.1145/2804408.2814179","DOIUrl":"https://doi.org/10.1145/2804408.2814179","url":null,"abstract":"Despite the elusive term \"Uncanny Valley\", research in the area of appealing virtual humans approaching realism continues. The theory suggests that characters lose appeal when they approach photorealism (e.g., [MacDorman et al. 2009]). Realistic virtual characters are judged harshly, since the human visual system has acquired more expertise with the featural restrictions of other humans than with the restrictions of artificial characters [Seyama and Nagayama 2007]. Stylisation (making the character's appearance abstract) is therefore often used to avoid virtual characters to be perceived as unpleasant. We designed an experiment to test if there is a general affinity towards abstract as oppose to realistic characters.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117193541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual reality headsets and immersive head-mounted displays have become commonplace and have found applications in digital gaming, film, and education. An immersive percept is created by surrounding the user of the VR system with photo-realistic scenes, sound, or other stimuli (e.g., haptic) that provide an engrossing experience to the viewer. The ability to interact with objects in the virtual environment has added to the interest in its use for learning and education. In this proposed work we plan to explore the ability to subtly guide viewers' attention to important regions of a controlled 3D virtual scene. The subtle gaze guidance approach [Bailey et al. 2009] combines eye tracking and subtle image-space modulations to guide the viewer's attention about a scene. These modulations are terminated before the viewer can fixate on them with their high-acuity foveal vision. This approach is preferred over overt techniques that make permanent changes to the scene being viewed. It has also been tested in controlled real-world environments [Booth et al. 2013]. The key challenge for such a system is the need for an external projector to present modulations on the scene objects to guide the viewer's attention. A VR system, however, lets the user view and interact with a 3D scene that is close to reality, thereby allowing researchers to digitally manipulate the 3D scene for active gaze guidance.
{"title":"Depth-based subtle gaze guidance in virtual reality environments","authors":"S. Sridharan, James Pieszala, Reynold J. Bailey","doi":"10.1145/2804408.2814187","DOIUrl":"https://doi.org/10.1145/2804408.2814187","url":null,"abstract":"Virtual reality headsets and immersive head-mounted displays have become commonplace and have found their applications in digital gaming, film and education. An immersive perception is created by surrounding the user of the VR system with photo-realisitic scenes, sound or other stimuli (e.g. haptic) that provide an engrossing experience to the viewer. The ability to interact with the objects in the virtual environment have added greater interest for its use in learning and education. In this proposed work we plan to explore the ability to subtly guide viewers' attention to important regions in a controlled 3D virtual scene. Subtle gaze guidance [Bailey et al. 2009] approach combines eye-tracking and subtle imagespace modulations to guide viewer's attention about a scene. These modulations are terminated before the viewer can fixate on them using their high acuity foveal vision. This approach is preferred over other overt techniques that make permanent changes to the scene being viewed. This approach has also been tested in controlled realworld environments [Booth et al. 2013]. The key challenge to such a system, is the need for an external projector to present modulations on the scene objects to guide viewer's attention. However a VR system enables the user to view and interact in a 3D scene that is close to reality, thereby allowing researchers to digitally manipulate the 3D scene for active gaze guidance.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115133419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Animated digital self-representations of the user in an immersive virtual environment, self-avatars, have been shown to aid perceptual judgments in the virtual environment and to provide critical information for people deciding what actions they can and cannot take. In this paper we explore whether the form of the self-avatar is important in providing this information. In particular, we vary the form of the self-avatar between no self-avatar, a simple line-based skeleton avatar, and a full-body, gender-matched self-avatar, and we examine whether the form of the self-avatar affects people's judgments of whether they could or could not step off a virtual ledge. Our results replicate prior work showing that having a self-avatar provides critical information for this judgment, but we find no effect of the form of the self-avatar on the judgment.
{"title":"The effect of avatar model in stepping off a ledge in an immersive virtual environment","authors":"Bobby Bodenheimer, Qiang Fu","doi":"10.1145/2804408.2804426","DOIUrl":"https://doi.org/10.1145/2804408.2804426","url":null,"abstract":"Animated digital self-representations of the user in an immersive virtual environment, a self-avatar, have been shown to aid in perceptual judgments in the virtual environment and to provide critical information for people deciding what actions they can and cannot take. In this paper we explore whether the form of the self-avatar is important in providing this information. In particular, we vary the form of a self-avatar between having no self-avatar, a simple line-based skeleton avatar, or a full-body, gender-matched self-avatar and examine whether the form of the self-avatar affects peoples judgments in whether they could or could not step off of a virtual ledge. Our results replicate prior work that shows that having a self-avatar provides critical information for this judgment, but finds no difference in the form of the self-avatar having an effect on the judgment.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115514975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rodrigo Martín, Julian Iseringhausen, Michael Weinmann, M. Hullin
The human ability to perceive materials and their properties is a very intricate multisensory skill and as such not only an intriguing research subject, but also an immense challenge when creating realistic virtual presentations of materials. In this paper, our goal is to learn about how the visual and auditory channels contribute to our perception of characteristic material parameters. At the center of our work are two psychophysical experiments performed on tablet computers, where the subjects rated a set of perceptual material qualities under different stimuli. The first experiment covers a full collection of materials in different presentations (visual, auditory and audio-visual). As a point of reference, subjects also performed all ratings on physical material samples. A key result of this experiment is that auditory cues strongly benefit the perception of certain qualities that are of a tactile nature (like "hard--soft", "rough--smooth"). The follow-up experiment demonstrates that, to a certain extent, audio cues can also be transferred to other materials, exaggerating or attenuating some of their perceived qualities. From these results, we conclude that a multimodal approach, and in particular the inclusion of sound, can greatly enhance the digital communication of material properties.
{"title":"Multimodal perception of material properties","authors":"Rodrigo Martín, Julian Iseringhausen, Michael Weinmann, M. Hullin","doi":"10.1145/2804408.2804420","DOIUrl":"https://doi.org/10.1145/2804408.2804420","url":null,"abstract":"The human ability to perceive materials and their properties is a very intricate multisensory skill and as such not only an intriguing research subject, but also an immense challenge when creating realistic virtual presentations of materials. In this paper, our goal is to learn about how the visual and auditory channels contribute to our perception of characteristic material parameters. At the center of our work are two psychophysical experiments performed on tablet computers, where the subjects rated a set of perceptual material qualities under different stimuli. The first experiment covers a full collection of materials in different presentations (visual, auditory and audio-visual). As a point of reference, subjects also performed all ratings on physical material samples. A key result of this experiment is that auditory cues strongly benefit the perception of certain qualities that are of a tactile nature (like \"hard--soft\", \"rough--smooth\"). The follow-up experiment demonstrates that, to a certain extent, audio cues can also be transferred to other materials, exaggerating or attenuating some of their perceived qualities. From these results, we conclude that a multimodal approach, and in particular the inclusion of sound, can greatly enhance the digital communication of material properties.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130345227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distance perception is important for many virtual reality applications, and numerous studies have found underestimated egocentric distances in head-mounted display (HMD) based virtual environments. Applying minification to imagery displayed in HMDs is a method that can reduce or eliminate the underestimation [Kuhl et al. 2009; Zhang et al. 2012]. In a previous study, we measured distance judgments with direct blind walking through an Oculus Rift DK1 HMD and found that participants judged distance accurately in a calibrated condition, and minification caused subjects to overestimate distances [Li et al. 2014]. This article describes two experiments built on the previous study to examine distance judgments and minification with the Oculus Rift DK2 HMD (Experiment 1), and in the real world with a simulated HMD (Experiment 2). From the results, we found statistically significant distance underestimation with the DK2, but the judgments were more accurate than results typically reported in HMD studies. In addition, we discovered that participants made similar distance judgments with the DK2 and the simulated HMD. Finally, we found for the first time that minification had a similar impact on distance judgments in both virtual and real-world environments.
{"title":"The effects of minification and display field of view on distance judgments in real and HMD-based environments","authors":"Bochao Li, Ruimin Zhang, A. Nordman, S. Kuhl","doi":"10.1145/2804408.2804427","DOIUrl":"https://doi.org/10.1145/2804408.2804427","url":null,"abstract":"Distance perception is important for many virtual reality applications, and numerous studies have found underestimated egocentric distances in head-mounted display (HMD) based virtual environments. Applying minification to imagery displayed in HMDs is a method that can reduce or eliminate the underestimation [Kuhl et al. 2009; Zhang et al. 2012]. In a previous study, we measured distance judgments with direct blind walking through an Oculus Rift DK1 HMD and found that participants judged distance accurately in a calibrated condition, and minification caused subjects to overestimate distances [Li et al. 2014]. This article describes two experiments built on the previous study to examine distance judgments and minification with the Oculus Rift DK2 HMD (Experiment 1), and in the real world with a simulated HMD (Experiment 2). From the results, we found statistically significant distance underestimation with the DK2, but the judgments were more accurate than results typically reported in HMD studies. In addition, we discovered that participants made similar distance judgments with the DK2 and the simulated HMD. Finally, we found for the first time that minification had a similar impact on distance judgments in both virtual and real-world environments.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122424184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We have developed an interactive 4-D visualization system that employs the principal vanishing points operation as a method to control the movement of the eye-point and the change of viewing direction in 4-D space. Unlike conventional 4-D visualization and interaction techniques, the system provides intuitive observation of 4-D space and objects by projecting them onto 3-D space in real time from various positions and directions in 4-D space. Our next challenge is to examine whether humans are able to develop a spatial perception of 4-D space and objects through the 4-D experiences provided by the system. In this paper, as a first step toward that aim, we assessed whether participants were able to gain an intuitive spatial understanding of 4-D objects. In the evaluation experiment, the participants first learned the structure of a hypercube. We then evaluated the spatial perception they developed during this learning period with tasks that required controlling the 4-D eye-point and reconstructing the hypercube from a set of its 3-D projection drawings. The results provided evidence that humans are able to develop 4-D spatial perception by operating the system.
{"title":"4-D spatial perception established through hypercube recognition tasks using interactive visualization system with 3-D screen","authors":"Takanobu Miwa, Yukihito Sakai, S. Hashimoto","doi":"10.1145/2804408.2804417","DOIUrl":"https://doi.org/10.1145/2804408.2804417","url":null,"abstract":"We have developed an interactive 4-D visualization system that employed the principal vanishing points operation as a method to control the movement of the eye-point and the change in the viewing direction in 4-D space. Different from conventional 4-D visualization and interaction techniques, the system can provide intuitive observation of 4-D space and objects by projecting them onto 3D space in real time from various positions and directions in 4-D space. Our next challenge is to examine whether humans are able to develop a spatial perception of 4-D space and objects through 4-D experiences provided by the system. In this paper, as the first step toward our aim, we assessed whether participants were able to get intuitive spatial understanding of 4-D objects. In the evaluation experiment, firstly, the participants learned a structure of a hypercube. Then, we evaluated their spatial perception developed in the learning period by tasks of controlling the 4-D eye-point and reconstructing the hypercube from a set of its 3-D projection drawings. The results indicated evidence for that humans were able to get 4-D spatial perception by operating the system.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"127 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132802472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","authors":"","doi":"10.1145/2804408","DOIUrl":"https://doi.org/10.1145/2804408","url":null,"abstract":"","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129388292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}