
Latest publications from ACM Transactions on Applied Perception

Depth Artifacts Caused by Spatial Interlacing in Stereoscopic 3D Displays
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2015-02-17 · DOI: 10.1145/2699266
Jussi H. Hakala, P. Oittinen, J. Häkkinen
Most spatially interlacing stereoscopic 3D displays present the odd rows of an image to one eye of the viewer and the even rows to the other. The visual system then fuses the interlaced image into a single percept. This row-based interlacing creates a small vertical disparity between the images; however, interlacing may also induce horizontal disparities, thus generating depth artifacts. Whether people perceive these depth artifacts and, if so, how large the artifacts are, was unknown. In this study, we hypothesized and tested whether people perceive interlaced edges at different depth levels. We tested oblique edge orientations ranging from 2 to 32 degrees and pixel sizes ranging from 16 to 79 arcsec of visual angle in a depth-probe experiment. Five participants viewed the visual stimuli through a stereoscope under three viewing conditions: noninterlaced, interlaced, and row averaged (i.e., with even and odd rows averaged). Our results indicated that people perceive depth artifacts when viewing interlaced stereoscopic images and that these artifacts increase with pixel size and decrease with edge orientation angle. A pixel size of 32 arcsec of visual angle still evoked depth percepts, whereas 16 arcsec did not. Row averaging effectively eliminated these depth artifacts. These findings have implications for display design, content production, image quality studies, and stereoscopic games and software.
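A minimal NumPy sketch of the row-averaging operation the abstract describes (replacing each even/odd row pair with its average, so an interlacing display sends identical content to both eyes); the function name is illustrative, not from the paper:

```python
import numpy as np

def row_average(image: np.ndarray) -> np.ndarray:
    """Replace each even/odd row pair with its average, so a row-interlacing
    display presents identical content to the left and right eyes."""
    h = image.shape[0] - image.shape[0] % 2  # ignore a trailing odd row
    pairs = image[:h].astype(float).reshape(h // 2, 2, *image.shape[1:])
    averaged = pairs.mean(axis=1, keepdims=True)
    out = image.astype(float).copy()
    out[:h] = np.repeat(averaged, 2, axis=1).reshape(h, *image.shape[1:])
    return out

# An oblique bright edge: after row averaging, adjacent rows are identical,
# so interlacing can no longer turn the row offset into a horizontal disparity.
img = np.triu(np.ones((6, 6)))
smoothed = row_average(img)
assert np.allclose(smoothed[0], smoothed[1])
```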
Citations: 8
A Comparative Perceptual Study of Soft-Shadow Algorithms
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2014-07-01 · DOI: 10.1145/2620029
Michael Hecher, M. Bernhard, O. Mattausch, D. Scherzer, M. Wimmer
We performed a perceptual user study of algorithms that approximate soft shadows in real time. Although a huge body of soft-shadow algorithms has been proposed, to our knowledge this is the first methodical study comparing different real-time shadow algorithms with respect to their plausibility and visual appearance. We evaluated soft-shadow properties such as penumbra overlap with respect to their relevance to shadow perception in a systematic way, and we believe that our results can help guide future shadow approaches in their methods of evaluation. In this study, we also capture the predominant case of an inexperienced user observing shadows without comparing them to a reference solution, such as when watching a movie or playing a game. One important result of this experiment is to scientifically verify that real-time soft-shadow algorithms, despite having become physically based and very realistic, can nevertheless be intuitively distinguished from a correct solution by untrained users.
Citations: 6
Human Perception of Visual Realism for Photo and Computer-Generated Face Images
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2014-07-01 · DOI: 10.1145/2620030
Shaojing Fan, Rangding Wang, T. Ng, Cheston Tan, Jonathan S. Herberg, Bryan L. Koenig
Computer-generated (CG) face images are common in video games, advertisements, and other media. CG faces vary in their degree of realism, a factor that impacts viewer reactions. Therefore, efficient control of visual realism of face images is important. Efficient control is enabled by a deep understanding of visual realism perception: the extent to which viewers judge an image as a real photograph rather than a CG image. Across two experiments, we explored the processes involved in visual realism perception of face images. In Experiment 1, participants made visual realism judgments on original face images, inverted face images, and images of faces that had the top and bottom halves misaligned. In Experiment 2, participants made visual realism judgments on original face images, scrambled faces, and images that showed different parts of faces. Our findings indicate that both holistic and piecemeal processing are involved in visual realism perception of faces, with holistic processing becoming more dominant when resolution is lower. Our results also suggest that shading information is more important than color for holistic processing, and that inversion makes visual realism judgments harder for realistic images but not for unrealistic images. Furthermore, we found that eyes are the most influential face part for visual realism, and face context is critical for evaluating realism of face parts. To the best of our knowledge, this work is a first realism-centric study attempting to bridge the human perception of visual realism on face images with general face perception tasks.
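A small NumPy sketch of the two stimulus manipulations named in the abstract, inversion and top/bottom misalignment (function names and the shift parameter are illustrative, not from the paper):

```python
import numpy as np

def invert(face: np.ndarray) -> np.ndarray:
    """Upside-down presentation: flip the image vertically."""
    return face[::-1]

def misalign_halves(face: np.ndarray, shift: int) -> np.ndarray:
    """Shift the bottom half of the face horizontally relative to the
    top half, as in composite-face paradigms."""
    mid = face.shape[0] // 2
    top, bottom = face[:mid], face[mid:]
    return np.vstack([top, np.roll(bottom, shift, axis=1)])

face = np.arange(16).reshape(4, 4)
assert np.array_equal(invert(invert(face)), face)   # inversion is its own inverse
assert misalign_halves(face, 1)[2, 1] == face[2, 0]  # bottom half shifted right by 1
```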
Citations: 21
Towards the Temporally Perfect Virtual Button: Touch-Feedback Simultaneity and Perceived Quality in Mobile Touchscreen Press Interactions
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2014-07-01 · DOI: 10.1145/2611387
Topi Kaaresoja, S. Brewster, V. Lantz
Pressing a virtual button is still the major interaction method on touchscreen mobile phones. Although phones are becoming more and more powerful, operating system software is getting more and more complex, causing latency in interaction. We were interested in gaining insight into touch-feedback simultaneity and the effects of latency on the perceived quality of touchscreen buttons. In an experiment, we varied the latency between touch and feedback from 0 to 300 ms for tactile, audio, and visual feedback modalities. We modelled the proportion of simultaneity perception as a function of latency for each modality condition, fitting a Gaussian model to the observations by maximum likelihood estimation. These models showed that the point of subjective simultaneity (PSS) was 5 ms for tactile, 19 ms for audio, and 32 ms for visual feedback. Our study included the scoring of perceived quality for all of the different latency conditions. The perceived quality dropped significantly between latency conditions of 70 and 100 ms when the feedback modality was tactile or audio, and between 100 and 150 ms when the feedback modality was visual. When the latency was 300 ms for all feedback modalities, the quality of the buttons was rated significantly lower than in all of the other latency conditions, suggesting that a long latency between a touch on the screen and feedback is problematic for users. Together with the PSS and these quality ratings, a 75% threshold was established to define a guideline for the recommended latency range between touch and feedback. Our guideline suggests that tactile feedback latency should be between 5 and 50 ms, audio feedback latency between 20 and 70 ms, and visual feedback latency between 30 and 85 ms. Using these values will ensure that users perceive the feedback as simultaneous with the finger's touch. These values also ensure that users do not perceive reduced quality.
These results will guide engineers and designers of touchscreen interactions by showing the trade-offs between latency and user preference and the effects that their choices might have on the quality of the interactions and feedback they design.
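A hedged sketch of the kind of fit the abstract describes: modelling the proportion of "simultaneous" responses as a Gaussian function of latency and taking the fitted peak as the PSS. The coarse grid-search maximum-likelihood fit and all names here are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_pss(latencies, n_simultaneous, n_trials):
    """Fit p(simultaneous | latency) = a * exp(-(x - mu)^2 / (2 s^2)) by
    maximizing the Bernoulli likelihood over a coarse parameter grid;
    the fitted mu is the point of subjective simultaneity (PSS)."""
    x = np.asarray(latencies, float)
    k = np.asarray(n_simultaneous, float)
    n = np.asarray(n_trials, float)
    best_nll, best_mu = np.inf, None
    for mu in np.arange(0.0, 100.0, 1.0):
        for s in np.arange(20.0, 200.0, 5.0):
            for a in np.arange(0.5, 1.0, 0.05):
                p = np.clip(a * np.exp(-((x - mu) ** 2) / (2 * s * s)),
                            1e-9, 1 - 1e-9)
                nll = -(k * np.log(p) + (n - k) * np.log(1 - p)).sum()
                if nll < best_nll:
                    best_nll, best_mu = nll, mu
    return best_mu

# Synthetic data with a true PSS of 20 ms: the fit should recover it.
lat = np.array([0, 30, 70, 100, 150, 300], float)
true_p = 0.9 * np.exp(-((lat - 20.0) ** 2) / (2 * 80.0 ** 2))
trials = np.full(lat.shape, 100.0)
pss = fit_pss(lat, true_p * trials, trials)
assert abs(pss - 20.0) < 5.0
```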
Citations: 58
A New Hypothesis on Facial Beauty Perception
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2014-07-01 · DOI: 10.1145/2622655
Fangmei Chen, Yong Xu, D. Zhang
In this article, a new hypothesis on facial beauty perception is proposed: the weighted average of two facial geometric features is more attractive than the inferior one of the two. Extensive evidence supports the new hypothesis. We collected 390 well-known beautiful face images (e.g., Miss Universe, movie stars, and supermodels) as well as 409 common face images from multiple sources. Dozens of volunteers rated the face images according to their attractiveness. Statistical regression models were trained on this database. Under the empirical risk principle, the hypothesis was tested on 318,801 pairs of images and received consistently supportive results. A corollary of the hypothesis is that attractive facial geometric features form a convex set. This corollary yields a convex-hull-based face beautification method, which guarantees attractiveness while minimizing the before/after difference. Experimental results show its superiority to state-of-the-art geometry-based face beautification methods. Moreover, the mainstream hypotheses on facial beauty perception (e.g., the averageness, symmetry, and golden-ratio hypotheses) are shown to be compatible with the proposed hypothesis.
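A toy sketch of the geometry behind the corollary: a convex combination of two attractive landmark vectors, and projection of a face onto the segment between two attractive exemplars, i.e. a one-dimensional instance of projecting onto the convex hull while keeping the before/after difference small. All names and the 2-D data are illustrative, not from the paper:

```python
import numpy as np

def blend(f1, f2, w):
    """Convex combination of two landmark vectors; by the hypothesis this is
    at least as attractive as the less attractive of the two."""
    return w * np.asarray(f1, float) + (1 - w) * np.asarray(f2, float)

def project_to_segment(f, a, b):
    """Closest convex combination of attractive exemplars a and b to face f:
    the simplest case of projecting onto a convex hull of attractive faces."""
    f, a, b = (np.asarray(v, float) for v in (f, a, b))
    d = b - a
    t = np.clip(d @ (f - a) / (d @ d), 0.0, 1.0)
    return a + t * d

a, b = np.array([0.0, 0.0]), np.array([2.0, 0.0])
f = np.array([1.0, 1.0])
assert np.allclose(blend(a, b, 0.5), [1.0, 0.0])
assert np.allclose(project_to_segment(f, a, b), [1.0, 0.0])  # nearest hull point
```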
Citations: 14
Perceptual Evaluation of Motion Editing for Realistic Throwing Animations
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2014-07-01 · DOI: 10.1145/2617916
Michele Vicovaro, Ludovic Hoyet, L. Burigana, C. O'Sullivan
Animation budget constraints during the development of a game often call for the use of a limited set of generic motions. Editing operations are thus generally required to animate virtual characters with a sufficient level of variety. Evaluating the perceptual plausibility of edited animations can therefore contribute greatly towards producing visually plausible animations. In this article, we study observers’ sensitivity to manipulations of overarm and underarm biological throwing animations. In the first experiment, we modified the release velocity of the ball while leaving the motion of the virtual thrower and the angle of release of the ball unchanged. In the second experiment, we evaluated the possibility of further modifying throwing animations by simultaneously editing the motion of the thrower and the release velocity of the ball, using dynamic time warping. In both experiments, we found that participants perceived shortened underarm throws to be particularly unnatural. We also found that modifying the thrower's motion in addition to modifying the release velocity of the ball does not significantly improve the perceptual plausibility of edited throwing animations. In the third experiment, we modified the angle of release of the ball while leaving the magnitude of the release velocity and the motion of the thrower unchanged, and found that this editing operation is effective for improving the perceptual plausibility of shortened underarm throws.
Our results provide valuable guidelines for developers of games and virtual reality applications by specifying thresholds for the perceptual plausibility of throwing manipulations while also providing several interesting insights for researchers in visual perception of biological motion.
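A minimal ballistic sketch of the third experiment's editing operation: rotating the release velocity to a new elevation angle while preserving its magnitude, then checking the effect on the throw. The release height, values, and function names are illustrative assumptions, not from the paper:

```python
import numpy as np

def retarget_release(v, new_angle_deg):
    """Rotate a 2-D release velocity (vx, vy) to a new elevation angle
    while keeping its magnitude unchanged."""
    speed = np.hypot(*v)
    a = np.radians(new_angle_deg)
    return np.array([speed * np.cos(a), speed * np.sin(a)])

def landing_distance(v, h0=1.8, g=9.81):
    """Horizontal distance travelled by a ball released at height h0 (m)."""
    vx, vy = v
    t = (vy + np.sqrt(vy ** 2 + 2 * g * h0)) / g  # time until y = 0
    return vx * t

v = np.array([6.0, 2.0])            # original release: ~18 degrees elevation
v2 = retarget_release(v, 35.0)       # edited release: same speed, steeper angle
assert np.isclose(np.hypot(*v2), np.hypot(*v))     # speed preserved
assert landing_distance(v2) > landing_distance(v)  # steeper angle flies farther here
```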
Citations: 12
Olfactory Adaptation in Virtual Environments
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2014-07-01 · DOI: 10.1145/2617917
Belma Ramic-Brkic, A. Chalmers
Visual perception is becoming increasingly important in computer graphics. Research on human visual perception has led to the development of perception-driven computer graphics techniques, where knowledge of the human visual system (HVS) and, in particular, its weaknesses are exploited when rendering and displaying 3D graphics. Findings on limitations of the HVS have been used to maintain high perceived quality but reduce the computed quality of some of the image without this quality difference being perceived. This article investigates the amount of time for which (if at all) such limitations could be exploited in the presence of smell. The results show that for our experiment, adaptation to smell does indeed affect participants’ ability to determine quality difference in the animations. Having been exposed to a smell before undertaking the experiment, participants were able to determine the quality in a similar fashion to the “no smell” condition, whereas without adaptation, participants were not able to distinguish the quality difference.
Citations: 11
Improving Transparency in Teleoperation by Means of Cutaneous Tactile Force Feedback
IF 1.6 · CAS Tier 4, Computer Science · Q3 Computer Science, Software Engineering · Pub Date: 2014-04-01 · DOI: 10.1145/2604969
C. Pacchierotti, Asad Tirmizi, D. Prattichizzo
A study on the role of cutaneous and kinesthetic force feedback in teleoperation is presented. Cutaneous cues provide less transparency than kinesthetic force, but they do not affect the stability of the teleoperation system. On the other hand, kinesthesia provides a compelling illusion of telepresence but affects the stability of the haptic loop. However, when employing common grounded haptic interfaces, it is not possible to independently control the cutaneous and kinesthetic components of the interaction. For this reason, many control techniques ensure a stable interaction by scaling down both kinesthetic and cutaneous force feedback, even though acting on the cutaneous channel is not necessary. We discuss here the feasibility of a novel approach. It aims at improving the realism of the haptic rendering, while preserving its stability, by modulating cutaneous force to compensate for a lack of kinesthesia. We carried out two teleoperation experiments, evaluating (1) the role of cutaneous stimuli when reducing kinesthesia and (2) the extent to which an overactuation of the cutaneous channel can fully compensate for a lack of kinesthetic force feedback. Results showed that, to some extent, it is possible to compensate for a lack of kinesthesia with the aforementioned technique, without significant performance degradation. Moreover, users showed a high comfort level in using the proposed system.
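The compensation scheme the abstract describes, attenuating kinesthetic feedback for stability while overactuating the cutaneous channel to render the residual force, can be sketched in a few lines. This is a minimal illustration assuming a single linear scaling factor `alpha` on the kinesthetic channel; the function name and the linear split are illustrative assumptions, not the authors' actual controller:

```python
def compensate_cutaneous(f_desired, alpha):
    """Split a desired feedback force between the two haptic channels.

    alpha in [0, 1] is the stability-driven attenuation applied to the
    kinesthetic channel; the cutaneous channel is overactuated to render
    the residual, so the total delivered force still matches f_desired.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    f_kinesthetic = alpha * f_desired          # scaled down for loop stability
    f_cutaneous = f_desired - f_kinesthetic    # residual rendered on the skin
    return f_kinesthetic, f_cutaneous
```

For example, with `alpha = 0.25` and a desired force of 2 N, the kinesthetic device would render 0.5 N and the cutaneous device the remaining 1.5 N.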
{"title":"Improving Transparency in Teleoperation by Means of Cutaneous Tactile Force Feedback","authors":"C. Pacchierotti, Asad Tirmizi, D. Prattichizzo","doi":"10.1145/2604969","DOIUrl":"https://doi.org/10.1145/2604969","url":null,"abstract":"A study on the role of cutaneous and kinesthetic force feedback in teleoperation is presented. Cutaneous cues provide less transparency than kinesthetic force, but they do not affect the stability of the teleoperation system. On the other hand, kinesthesia provides a compelling illusion of telepresence but affects the stability of the haptic loop. However, when employing common grounded haptic interfaces, it is not possible to independently control the cutaneous and kinesthetic components of the interaction. For this reason, many control techniques ensure a stable interaction by scaling down both kinesthetic and cutaneous force feedback, even though acting on the cutaneous channel is not necessary.\u0000 We discuss here the feasibility of a novel approach. It aims at improving the realism of the haptic rendering, while preserving its stability, by modulating cutaneous force to compensate for a lack of kinesthesia. We carried out two teleoperation experiments, evaluating (1) the role of cutaneous stimuli when reducing kinesthesia and (2) the extent to which an overactuation of the cutaneous channel can fully compensate for a lack of kinesthetic force feedback. Results showed that, to some extent, it is possible to compensate for a lack of kinesthesia with the aforementioned technique, without significant performance degradation. 
Moreover, users showed a high comfort level in using the proposed system.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"40 1","pages":"4:1-4:16"},"PeriodicalIF":1.6,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82871514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 95
The Role of Sound Source Perception in Gestural Sound Description
IF 1.6 Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2014-04-01 DOI: 10.1145/2536811
Baptiste Caramiaux, Frédéric Bevilacqua, Tommaso Bianco, Norbert Schnell, Olivier Houix, P. Susini
We investigated gesture descriptions of sound stimuli performed during a listening task. Our hypothesis is that the strategies in gestural responses depend on the level of identification of the sound source, and specifically on the identification of the action causing the sound. To validate our hypothesis, we conducted two experiments. In the first experiment, we built two corpora of sounds. The first corpus contains sounds with identifiable causal actions; the second contains sounds for which no causal action could be identified. These corpus properties were validated through a listening test. In the second experiment, participants performed arm and hand gestures synchronously while listening to sounds taken from these corpora. Afterward, we conducted interviews asking participants to verbalize their experience while watching their own video recordings. They were questioned on their perception of the listened sounds and on their gestural strategies. We showed that for sounds where a causal action can be identified, participants mainly mimic the action that produced the sound. In the other case, when no action can be associated with the sound, participants trace contours related to acoustic features of the sound. We also found that gesture variability across participants is higher for causal sounds than for noncausal sounds. This variability demonstrates that, in the first case, participants have several ways of producing the same action, whereas in the second case, the sound features tend to make the gesture responses consistent.
{"title":"The Role of Sound Source Perception in Gestural Sound Description","authors":"Baptiste Caramiaux, Frédéric Bevilacqua, Tommaso Bianco, Norbert Schnell, Olivier Houix, P. Susini","doi":"10.1145/2536811","DOIUrl":"https://doi.org/10.1145/2536811","url":null,"abstract":"We investigated gesture description of sound stimuli performed during a listening task. Our hypothesis is that the strategies in gestural responses depend on the level of identification of the sound source and specifically on the identification of the action causing the sound. To validate our hypothesis, we conducted two experiments. In the first experiment, we built two corpora of sounds. The first corpus contains sounds with identifiable causal actions. The second contains sounds for which no causal actions could be identified. These corpora properties were validated through a listening test. In the second experiment, participants performed arm and hand gestures synchronously while listening to sounds taken from these corpora. Afterward, we conducted interviews asking participants to verbalize their experience while watching their own video recordings. They were questioned on their perception of the listened sounds and on their gestural strategies. We showed that for the sounds where causal action can be identified, participants mainly mimic the action that has produced the sound. In the other case, when no action can be associated with the sound, participants trace contours related to sound acoustic features. We also found that the interparticipants’ gesture variability is higher for causal sounds compared to noncausal sounds. 
Variability demonstrates that, in the first case, participants have several ways of producing the same action, whereas in the second case, the sound features tend to make the gesture responses consistent.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"100 1","pages":"1:1-1:19"},"PeriodicalIF":1.6,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78247205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
Online 3D Gaze Localization on Stereoscopic Displays
IF 1.6 Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2014-04-01 DOI: 10.1145/2593689
Rui I. Wang, Brandon Pelfrey, A. Duchowski, D. House
This article summarizes our previous work on developing an online system to allow the estimation of 3D gaze depth using eye tracking in a stereoscopic environment. We report on recent extensions allowing us to report the full 3D gaze position. Our system employs a 3D calibration process that determines the parameters of a mapping from a naive depth estimate, based simply on triangulation, to a refined 3D gaze point estimate tuned to a particular user. We show that our system is an improvement on the geometry-based 3D gaze estimation returned by a proprietary algorithm provided with our tracker. We also compare our approach with that of the Parameterized Self-Organizing Map (PSOM) method, due to Essig and colleagues, which also individually calibrates to each user. We argue that our method is superior in speed and ease of calibration, is easier to implement, and does not require an iterative solver to produce a gaze position, thus guaranteeing computation at the rate of tracker acquisition. In addition, we report on a user study that indicates that, compared with PSOM, our method more accurately estimates gaze depth, and is nearly as accurate in estimating horizontal and vertical position. Results are verified on two different 4D eye tracking systems, a high accuracy Wheatstone haploscope and a medium accuracy active stereo display. Thus, it is the recommended method for applications that primarily require gaze depth information, while its ease of use makes it suitable for many applications requiring full 3D gaze position.
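The "naive depth estimate, based simply on triangulation" that the per-user calibration then refines can be illustrated as the midpoint of the closest points between the two eyes' gaze rays. The sketch below is a generic version of that geometric step; the function name and this particular midpoint formulation are assumptions for illustration, not the authors' implementation:

```python
def naive_gaze_point(p_left, d_left, p_right, d_right):
    """Naive 3D gaze estimate: midpoint of the closest points between the
    left and right gaze rays p + s*d (directions need not be unit length)."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    w0 = [pl - pr for pl, pr in zip(p_left, p_right)]
    a, b, c = dot(d_left, d_left), dot(d_left, d_right), dot(d_right, d_right)
    d, e = dot(d_left, w0), dot(d_right, w0)
    denom = a * c - b * b                      # zero when the rays are parallel
    if abs(denom) < 1e-12:
        raise ValueError("gaze rays are (near-)parallel; no finite gaze point")
    s = (b * e - c * d) / denom                # parameter along the left ray
    t = (a * e - b * d) / denom                # parameter along the right ray
    closest_l = [p + s * di for p, di in zip(p_left, d_left)]
    closest_r = [p + t * di for p, di in zip(p_right, d_right)]
    return [(cl + cr) / 2.0 for cl, cr in zip(closest_l, closest_r)]
```

With eye positions 6 cm apart and both gaze directions aimed at a point half a meter away, the estimate recovers that point exactly; real tracker noise makes the two rays skew, which is the kind of error the 3D calibration stage is meant to correct.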
{"title":"Online 3D Gaze Localization on Stereoscopic Displays","authors":"Rui I. Wang, Brandon Pelfrey, A. Duchowski, D. House","doi":"10.1145/2593689","DOIUrl":"https://doi.org/10.1145/2593689","url":null,"abstract":"This article summarizes our previous work on developing an online system to allow the estimation of 3D gaze depth using eye tracking in a stereoscopic environment. We report on recent extensions allowing us to report the full 3D gaze position. Our system employs a 3D calibration process that determines the parameters of a mapping from a naive depth estimate, based simply on triangulation, to a refined 3D gaze point estimate tuned to a particular user. We show that our system is an improvement on the geometry-based 3D gaze estimation returned by a proprietary algorithm provided with our tracker. We also compare our approach with that of the Parameterized Self-Organizing Map (PSOM) method, due to Essig and colleagues, which also individually calibrates to each user. We argue that our method is superior in speed and ease of calibration, is easier to implement, and does not require an iterative solver to produce a gaze position, thus guaranteeing computation at the rate of tracker acquisition. In addition, we report on a user study that indicates that, compared with PSOM, our method more accurately estimates gaze depth, and is nearly as accurate in estimating horizontal and vertical position. Results are verified on two different 4D eye tracking systems, a high accuracy Wheatstone haploscope and a medium accuracy active stereo display. 
Thus, it is the recommended method for applications that primarily require gaze depth information, while its ease of use makes it suitable for many applications requiring full 3D gaze position.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"5 1","pages":"3:1-3:21"},"PeriodicalIF":1.6,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84618455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34