
Latest Publications in ACM Transactions on Applied Perception

Design and Validation of a Virtual Reality Mental Rotation Test
CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-09 | DOI: 10.1145/3626238
Kristin A. Bartlett, Almudena Palacios-Ibáñez, Jorge Dorribo Camba
Mental rotation, a common measure of spatial ability, has traditionally been assessed through paper-based instruments like the Mental Rotation Test (MRT) or the Purdue Spatial Visualization Test: Rotations (PSVT:R). The fact that these instruments present 3D shapes in a 2D format devoid of natural cues like shading and perspective likely limits their ability to accurately assess the fundamental skill of mentally rotating 3D shapes. In this paper, we describe the Virtual Reality Mental Rotation Assessment (VRMRA), a virtual reality-based mental rotation assessment derived from the Revised PSVT:R and MRT. The VRMRA reimagines traditional mental rotation assessments in a room-scale virtual environment and uses hand-tracking and elements of gamification in an attempt to create an intuitive, engaging experience for test-takers. To validate the instrument, we compared response patterns in the VRMRA with patterns observed on the MRT and Revised PSVT:R. For the PSVT:R-type questions, items requiring a rotation around two axes were significantly harder than items requiring rotations around a single axis in the VRMRA, which is not the case in the Revised PSVT:R. For the MRT-type questions in the VRMRA, a moderate negative correlation was found between the degree of rotation in the X direction and item difficulty. While the problem of occlusion was reduced, features of the shapes and distractors accounted for 50.6% of the variance in item difficulty. Results suggest that the VRMRA is likely a more accurate tool for assessing mental rotation ability than traditional instruments that present the stimuli through 2D media. Our findings also point to potential problems with the fundamental designs of the Revised PSVT:R and MRT question formats.
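The reported link between rotation demands and item difficulty lends itself to a simple item-level analysis. Below is a minimal sketch of how such a correlation could be computed; the per-item rotation angles and error rates are illustrative stand-ins, not the paper's data.

```python
import numpy as np
from scipy import stats

# One row per item: total rotation about the X axis (degrees) and the
# proportion of test-takers answering incorrectly (item difficulty).
rotation_x_deg = np.array([90, 135, 180, 225, 270, 90, 180, 270])
prop_incorrect = np.array([0.58, 0.51, 0.44, 0.36, 0.30, 0.61, 0.42, 0.27])

# A negative r would mirror the moderate negative correlation reported
# for the MRT-type questions in the VRMRA.
r, p = stats.pearsonr(rotation_x_deg, prop_incorrect)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```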
Citations: 0
The Haptic Intensity Order Illusion is Caused by Amplitude Changes
CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-02 | DOI: 10.1145/3626237
Ivan Makarov, Snorri Steinn Stefánsson Thors, Elvar Atli Aevarsson, Finnur Kári Pind Jörgensson, Nashmin Yeganyeh, Árni Kristjánsson, Runar Unnthorsson
When two brief vibrotactile stimulations are sequentially applied to observers' lower back, there is systematic mislocalization of the stimulation: if the second stimulation is of higher intensity than the first one, observers tend to respond that the second stimulation was above the first one, and vice versa when a weak stimulation follows a strong one. This haptic mislocalization effect has been called the intensity order illusion. In the original demonstration of the illusion, the frequency and amplitude of the stimulation were inextricably linked, so that changes in amplitude also resulted in changes in frequency. It is therefore unknown whether the illusion is caused by changes in frequency, amplitude, or both. To test this, we performed a multifactorial experiment using L5 actuators that allow independent manipulation of frequency and amplitude. This approach enabled us to investigate the effects of stimulus amplitude, frequency, and location, and to assess any potential interactions between these factors. We report four main findings: 1) we were able to replicate the intensity order illusion with the L5 tactors; 2) the illusion mainly occurred in the upwards direction, or in other words, when a strong stimulation following a weaker one occurred above or in the same location as the first stimulation; 3) the illusion did not occur when similar stimulation patterns were applied in the horizontal direction; and 4) the illusion was solely due to changes in amplitude, while changes in frequency (100 Hz vs. 200 Hz) had no effect.
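The methodological point is that the actuators let amplitude and frequency vary independently, unlike the original demonstration where the two were coupled. A minimal sketch of generating such decoupled drive signals follows; the sample rate, duration, and amplitude values are assumptions for illustration, not device specifications.

```python
import numpy as np

def vibro_wave(freq_hz, amplitude, duration_s=0.1, sample_rate_hz=8000):
    """Sinusoidal drive signal with independently set frequency and amplitude."""
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

weak_100 = vibro_wave(100, amplitude=0.3)    # weak first stimulation
strong_100 = vibro_wave(100, amplitude=0.9)  # stronger, same frequency
strong_200 = vibro_wave(200, amplitude=0.9)  # frequency doubled, amplitude held
```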
Citations: 1
The effect of interocular contrast differences on the appearance of augmented reality imagery
IF 1.6 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-08-29 | DOI: 10.1145/3617684
Minqi Wang, Jian Ding, D. Levi, Emily Cooper
Augmented reality (AR) devices seek to create compelling visual experiences that merge virtual imagery with the natural world. These devices often rely on wearable near-eye display systems that can optically overlay digital images to the left and right eyes of the user separately. Ideally, the two eyes should be shown images with minimal radiometric differences (e.g., the same overall luminance, contrast, and color in both eyes), but achieving this binocular equality can be challenging in wearable systems with stringent demands on weight and size. Basic vision research has shown that a spectrum of potentially detrimental perceptual effects can be elicited by imagery with radiometric differences between the eyes, but it is not clear whether and how these findings apply to the experience of modern AR devices. In this work, we first develop a testing paradigm for assessing multiple aspects of visual appearance at once, and characterize five key perceptual factors when participants viewed stimuli with interocular contrast differences. In a second experiment, we simulate optical see-through AR imagery using conventional desktop LCD monitors and use the same paradigm to evaluate the multifaceted perceptual implications when the AR display luminance differs between the two eyes. We also include simulations of monocular AR systems (i.e., systems in which only one eye sees the displayed image). Our results suggest that interocular contrast differences can drive several potentially detrimental perceptual effects in binocular AR systems, such as binocular luster, rivalry, and spurious depth differences. In addition, monocular AR displays tend to have more artifacts than binocular displays with a large contrast difference in the two eyes. A better understanding of the range and likelihood of these perceptual phenomena can help inform design choices that support a high-quality user experience in AR.
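A standard way to quantify the radiometric difference studied here is Michelson contrast, computed separately for each eye's image. The sketch below illustrates the idea on synthetic luminance arrays; the michelson_contrast helper, the 60% contrast reduction in one eye, and the image sizes are illustrative assumptions, not the study's stimuli.

```python
import numpy as np

def michelson_contrast(img):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin) for a luminance image."""
    lmax, lmin = img.max(), img.min()
    return (lmax - lmin) / (lmax + lmin)

rng = np.random.default_rng(0)
base = rng.uniform(0.4, 0.6, size=(64, 64))  # mid-gray virtual image
left_eye = 0.5 + (base - 0.5) * 1.0          # full contrast to the left eye
right_eye = 0.5 + (base - 0.5) * 0.4         # reduced contrast to the right eye

diff = michelson_contrast(left_eye) - michelson_contrast(right_eye)
print(f"interocular contrast difference: {diff:.3f}")
```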
Citations: 0
Calibrated passability perception in virtual reality transfers to augmented reality
IF 1.6 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-08-05 | DOI: 10.1145/3613450
Holly C. Gagnon, Jeanine Stefanucci, Sarah H. Creem-Regehr, Bobby Bodenheimer
As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception for action opportunities (affordances) so that action judgments in VR and AR mirror actors’ real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one’s passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether this calibration transfers to AR. Participants held a pole at 45 degrees and made passability judgments in AR (pretest phase). Then, they made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants’ judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in VR and in AR became closer to an actor’s actual ability, which could make training applications in these technologies more effective.
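The calibration logic can be pictured as an error-correction loop: collision feedback nudges the judged passable aperture toward the actor's actual effective width. The sketch below is a toy model of that idea only; the widths, learning rate, and proportional update rule are assumptions, not the authors' analysis.

```python
# Assumed values: actual body-plus-pole width and an initial overestimate.
effective_width_cm = 95.0    # actual passable aperture for actor + 45-degree pole
judged_threshold_cm = 115.0  # aperture first judged as "just passable"
learning_rate = 0.5          # how strongly each feedback trial corrects the judgment

for trial in range(1, 6):
    # Collision (or clearance) feedback exposes the judgment error.
    error = judged_threshold_cm - effective_width_cm
    judged_threshold_cm -= learning_rate * error
    print(f"trial {trial}: judged threshold = {judged_threshold_cm:.1f} cm")

# The judged threshold converges on the actual effective width, analogous to
# calibration acquired in VR carrying over to the AR posttest judgments.
```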
Citations: 0
Changes in Navigation Over Time: A Comparison of Teleportation and Joystick-based Locomotion
IF 1.6 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-08-05 | DOI: 10.1145/3613902
Moloud Nasiri, John R. Porter, Kristopher Kohm, Andrew C. Robb
Little research has studied how people's use of Virtual Reality (VR) changes as they gain experience with VR. This paper reports the results of an experiment investigating how users' behavior with two locomotion methods, teleportation and joystick-based locomotion, changed over four weeks. Twenty novice VR users (no more than 1 hour of prior experience with any form of walking in VR) were recruited. They were loaned an Oculus Quest for four weeks to use on their own time, including an activity we provided them with. Results showed that the time required to complete the navigation task decreased faster for joystick-based locomotion. Spatial memory improved with time, particularly when using teleportation (which starts at a disadvantage relative to joystick-based locomotion). Also, overall cybersickness decreased slightly over time; two dimensions of cybersickness (nausea and disorientation) increased notably over time using joystick-based navigation.
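One way to make "time decreased faster for joystick-based locomotion" concrete is to fit a learning curve per method and compare the rate parameters. The sketch below fits a power law to weekly completion times; the data values and the power-law form are illustrative assumptions, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(session, a, b):
    """Classic learning curve: completion time falls as a power of practice."""
    return a * session ** (-b)

sessions = np.array([1.0, 2.0, 3.0, 4.0])        # four weekly sessions
t_joystick = np.array([60.0, 44.0, 36.0, 31.0])  # seconds, illustrative
t_teleport = np.array([55.0, 48.0, 44.0, 41.0])

(a_j, b_j), _ = curve_fit(power_law, sessions, t_joystick)
(a_t, b_t), _ = curve_fit(power_law, sessions, t_teleport)

# A larger exponent b means a steeper drop in task time across sessions.
print(f"rate exponents: joystick b = {b_j:.2f}, teleportation b = {b_t:.2f}")
```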
Citations: 0
Participatory Design of Virtual Humans for Mental Health Support Among North American Computer Science Students: Voice, Appearance, and the Similarity-attraction Effect
IF 1.6 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-08-05 | DOI: 10.1145/3613961
P. Feijóo-García, Chase Wrenn, J. Stuart, A. G. de Siqueira, Benjamin C. Lok
Virtual humans (VHs) have the potential to support mental wellness among college computer science (CS) students. However, designing effective VHs for counseling purposes requires a clear understanding of students’ demographics, backgrounds, and expectations. To this end, we conducted two user studies with 216 CS students from a major university in North America. In the first study, we explored how students co-designed VHs to support mental wellness conversations and found that the VHs’ demographics, appearance, and voice closely resembled the characteristics of their designers. In the second study, we investigated how the interplay between the VH’s appearance and voice impacted the agent’s effectiveness in promoting CS students’ intentions toward gratitude journaling. Our findings suggest that the active participation of CS students in VH design leads to the creation of agents that closely resemble their designers. Moreover, we found that the interplay between the VH’s appearance and voice impacts the agent’s effectiveness in promoting CS students’ intentions toward mental wellness techniques.
Citations: 0
On Human-like Biases in Convolutional Neural Networks for the Perception of Slant from Texture
IF 1.6 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-08-05 | DOI: 10.1145/3613451
Yuanhao Wang, Qian Zhang, Celine Aubuchon, Jovan T. Kemp, F. Domini, J. Tompkin
Depth estimation is fundamental to 3D perception, and humans are known to have biased estimates of depth. This study investigates whether convolutional neural networks (CNNs) can be similarly biased when predicting the sign of curvature and depth of textured surfaces under different viewing conditions (field of view) and surface parameters (slant and texture irregularity). This hypothesis is drawn from the idea that texture gradients described by local neighborhoods—a cue identified in the human vision literature—are also representable within convolutional neural networks. To this end, we trained both unsupervised and supervised CNN models on renderings of slanted surfaces with random polka-dot patterns and analyzed their internal latent representations. The results show that the unsupervised models have prediction biases similar to humans' across all experiments, while the supervised CNN models do not exhibit similar biases. The latent spaces of the unsupervised models can be linearly separated into axes representing field of view and optical slant. For supervised models, this ability varies substantially with model architecture and the kind of supervision (continuous slant vs. sign of slant). Even though this study says nothing of any shared mechanism, these findings suggest that unsupervised CNN models can make predictions similar to those of the human visual system. Code: github.com/brownvc/Slant-CNN-Biases
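The claim that latent spaces "can be linearly separated into axes representing field of view and optical slant" corresponds to fitting a linear probe on latent vectors. The sketch below illustrates that analysis on synthetic latents constructed to be linearly decodable; it does not use the paper's trained models (the authors' code is at the repository linked above).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_images, latent_dim = 500, 32
fov = rng.uniform(20, 60, n_images)     # field of view, degrees
slant = rng.uniform(-45, 45, n_images)  # optical slant, degrees

# Stand-in latents: two random directions encode the factors linearly, plus noise.
axes = rng.normal(size=(2, latent_dim))
latents = (np.outer(fov, axes[0]) + np.outer(slant, axes[1])
           + rng.normal(scale=0.1, size=(n_images, latent_dim)))

# A high R^2 from the linear probe indicates the two factors are linearly
# separable in the latent space.
targets = np.column_stack([fov, slant])
probe = LinearRegression().fit(latents, targets)
print(f"linear-probe R^2 = {probe.score(latents, targets):.3f}")
```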
Citations: 0
Improving the Perception of Mid-Air Tactile Shapes With Spatio-Temporally-Modulated Tactile Pointers
IF 1.6 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-07-29 | DOI: 10.1145/3611388
Lendy Mulot, Thomas Howard, C. Pacchierotti, M. Marchal
Ultrasound mid-air haptic (UMH) devices can remotely render vibrotactile shapes on the skin of unequipped users, e.g., to draw haptic icons or render virtual object shapes. Spatio-temporal modulation (STM), the state-of-the-art UMH shape rendering method, provides great freedom in shape design and produces the strongest possible stimuli for this technology. Yet, STM shapes are often reported to be blurry, complicating shape identification. Dynamic tactile pointers (DTP) were recently introduced as a technique to overcome this issue. By tracing a contour with an amplitude-modulated focal point, they significantly improve shape identification accuracy over STM, but at the cost of much lower stimulus intensity. Building upon this, we propose Spatio-temporally-modulated Tactile Pointers (STP), a novel method for rendering clearer and sharper UMH shapes while at the same time producing strong vibrotactile sensations. We ran two human-participant experiments, which show that STP shapes are perceived as significantly stronger than DTP shapes, while shape identification accuracy is significantly improved over STM and on par with that obtained with DTP. Our work has implications for effective shape rendering with UMH and provides insights which could inform future psychophysical investigation into vibrotactile shape perception in UMH.
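Conceptually, the pointer idea amounts to a small, rapidly drawn shape whose center sweeps slowly along the target contour. The sketch below generates such a focal-point path for a circular target; the stp_trajectory name, radii, drawing rates, and update rate are illustrative assumptions, not values from the paper or any device API.

```python
import numpy as np

def stp_trajectory(contour_r_m=0.04, pointer_r_m=0.005,
                   sweep_hz=1.0, stm_hz=100.0,
                   duration_s=1.0, update_hz=10000):
    """(x, y) focal-point samples: a fast, repeatedly drawn pointer circle
    riding a slow sweep along the target contour."""
    t = np.arange(0, duration_s, 1.0 / update_hz)
    sweep = 2 * np.pi * sweep_hz * t  # slow traversal of the target contour
    fast = 2 * np.pi * stm_hz * t     # rapid redrawing of the pointer itself
    x = contour_r_m * np.cos(sweep) + pointer_r_m * np.cos(fast)
    y = contour_r_m * np.sin(sweep) + pointer_r_m * np.sin(fast)
    return x, y

x, y = stp_trajectory()  # samples to hand to a device's focal-point scheduler
```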
Citations: 0
Twin identification over viewpoint change: A deep convolutional neural network surpasses humans
IF 1.6 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-07-20 | DOI: 10.1145/3609224
Connor J. Parde, Virginia E. Strehle, Vivekjyoti Banerjee, Ying Hu, Jacqueline G. Cavazos, Carlos D. Castillo, Alice J. O’Toole

Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N = 87) viewed pairs of face images of three types: same-identity, general imposters (different identities from similar demographic groups), and twin imposters (identical twin siblings). The task was to determine whether the pairs showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45-degree profile, and frontal to 90-degree profile. Accuracy for discriminating matched-identity pairs from twin-imposter pairs and general-imposter pairs was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than twin-imposter pairs, and accuracy declined with increased viewpoint disparity between the images in a pair. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but with performance at or above all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types [range r = 0.38 to r = 0.63], suggesting general accord between the perception of face similarity by humans and the DCNN. These findings also contribute to our understanding of DCNN performance for discriminating high-resemblance faces, demonstrate that the DCNN performs at a level at or above humans, and suggest a degree of parity between the features used by humans and the DCNN.
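The item-level analysis pairs a machine similarity score per image pair (commonly the cosine similarity of DCNN embeddings) with human ratings and correlates the two. The sketch below shows that computation on synthetic embeddings and ratings; the embedding dimension, noise levels, and the human-rating stand-in are assumptions, not the study's materials.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_pairs, dim = 40, 512
emb_a = rng.normal(size=(n_pairs, dim))
emb_b = emb_a + rng.normal(scale=0.8, size=(n_pairs, dim))  # related image pairs

def cosine_similarity(a, b):
    """Row-wise cosine similarity between two matrices of embeddings."""
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

machine_scores = cosine_similarity(emb_a, emb_b)
human_ratings = machine_scores + rng.normal(scale=0.1, size=n_pairs)  # stand-in

# Per image-pair type, the study reports r between 0.38 and 0.63.
r, p = stats.pearsonr(human_ratings, machine_scores)
print(f"item-level correlation: r = {r:.2f} (p = {p:.4f})")
```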

Citations: 0