
Journal of Vision: Latest Publications

Flexible Relations Between Confidence and Confidence RTs in Post-Decisional Models of Confidence: A Reply to Chen and Rahnev.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.9
Stef Herregods, Luc Vermeylen, Kobe Desender
{"title":"Flexible Relations Between Confidence and Confidence RTs in Post-Decisional Models of Confidence: A Reply to Chen and Rahnev.","authors":"Stef Herregods, Luc Vermeylen, Kobe Desender","doi":"10.1167/jov.24.12.9","DOIUrl":"10.1167/jov.24.12.9","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 12","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11572761/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating the relationship between subjective perception and unconscious feature integration.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.1
Lukas Vogelsang, Maëlan Q Menétrey, Leila Drissi-Daoudi, Michael H Herzog

Visual features need to be temporally integrated to detect motion signals and solve the many ill-posed problems of vision. It has previously been shown that such integration occurs in windows of unconscious processing of up to 450 milliseconds. However, whether features are integrated should be governed by perceptually meaningful mechanisms. Here, we expand on previous findings suggesting that subjective perception and integration may be linked. Specifically, different observers were found to group elements differently and to exhibit corresponding feature integration behavior. If the former were to influence the latter, perception would appear to not only be the outcome of integration but to potentially also be part of it. To test any such linkages more systematically, we here examined the role of one of the key perceptual grouping cues, color similarity, in the Sequential Metacontrast Paradigm (SQM). In the SQM, participants are presented with two streams of lines that are expanding from the center outwards. If several lines in the attended motion stream are offset, offsets integrate unconsciously and mandatorily for periods of up to 450 milliseconds. Across three experiments, we presented lines of varied colors. Our results reveal that individuals who perceive differently colored lines as "popping out" from the motion stream do not exhibit mandatory integration but that individuals who perceive such lines as part of an integrated motion stream do show offset integration behavior across the entire stream. These results attest to the proposed linkage between subjective perception and integration behavior in the SQM.
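Concretely, the mandatory integration described above can be sketched with a toy linear-summation model: offsets of lines grouped into one stream sum within the unconscious window, whereas lines that pop out are excluded. The sketch below is a minimal illustration of that logic, with hypothetical grouping flags and offset values, not the authors' paradigm code.

```python
# Toy illustration of mandatory offset integration in the SQM.
# Assumption (for illustration only): offsets of lines experienced as one
# motion stream sum linearly within the ~450 ms unconscious window, and
# lines that perceptually "pop out" are excluded from the sum.

INTEGRATION_WINDOW_MS = 450  # upper bound on the integration window reported above

def perceived_offset(line_events, grouped_flags):
    """line_events: list of (time_ms, offset_arcmin) for lines in the stream.
    grouped_flags: True where a line is seen as part of the integrated stream."""
    total = 0.0
    for (t_ms, offset), in_stream in zip(line_events, grouped_flags):
        if in_stream and t_ms <= INTEGRATION_WINDOW_MS:
            total += offset  # mandatory summation inside the window
    return total

# Opposing offsets cancel when both lines are grouped into one stream...
events = [(100, +1.0), (300, -1.0)]
print(perceived_offset(events, [True, True]))   # 0.0 -> stream looks aligned
# ...whereas a line that pops out (e.g., a differently colored line) is
# excluded, leaving the remaining offset visible.
print(perceived_offset(events, [True, False]))  # +1.0
```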

Citations: 0
Deep convolutional neural networks are sensitive to face configuration.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.6
Virginia E Strehle, Natalie K Bendiksen, Alice J O'Toole

Deep convolutional neural networks (DCNNs) are remarkably accurate models of human face recognition. However, less is known about whether these models generate face representations similar to those used by humans. Sensitivity to facial configuration has long been considered a marker of human perceptual expertise for faces. We tested whether DCNNs trained for face identification "perceive" alterations to facial features and their configuration. We also compared the extent to which representations changed as a function of the alteration type. Facial configuration was altered by changing the distance between the eyes or the distance between the nose and mouth. Facial features were altered by replacing the eyes or mouth with those of another face. Altered faces were processed by DCNNs (Ranjan et al., 2018; Szegedy et al., 2017) and the similarity of the generated representations was compared. Both DCNNs were sensitive to configural and feature changes, with changes to configuration altering the DCNN representations more than changes to face features. To determine whether the DCNNs' greater sensitivity to configuration was due to a priori differences in the images or characteristics of the DCNN processing, we compared the representation of features and configuration between the low-level, pixel-based representations and the DCNN-generated representations. Sensitivity to face configuration increased from the pixel-level image to the DCNN encoding, whereas the sensitivity to features did not change. The enhancement of configural information may be due to the utility of configuration for discriminating among similar faces combined with the within-category nature of face identification training.
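Comparisons of generated representations like these are usually made with a similarity metric over embedding vectors; the sketch below uses cosine similarity on synthetic 512-dimensional embeddings. The metric, embedding size, and noise scales are illustrative assumptions rather than the paper's pipeline.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embeddings (e.g., DCNN top-layer vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 512-D embeddings: an original face plus configural and
# featural alterations, modeled here as additive noise of different sizes.
rng = np.random.default_rng(0)
original = rng.normal(size=512)
config_altered = original + rng.normal(scale=0.4, size=512)   # e.g., eye distance changed
feature_altered = original + rng.normal(scale=0.2, size=512)  # e.g., eyes replaced

# A larger representational change appears as lower similarity to the original.
print(cosine_similarity(original, config_altered))
print(cosine_similarity(original, feature_altered))
```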

Citations: 0
Ocular biometric responses to simulated polychromatic defocus.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.3
Sowmya Ravikumar, Elise N Harb, Karen E Molina, Sarah E Singh, Joel Segre, Christine F Wildsoet

Evidence from human studies of ocular accommodation and studies of animals reared in monochromatic conditions suggest that chromatic signals can guide ocular growth. We hypothesized that ocular biometric response in humans can be manipulated by simulating the chromatic contrast differences associated with imposition of optical defocus. The red, green, and blue (RGB) channels of an RGB movie of the natural world were individually incorporated with computational defocus to create two different movie stimuli. The magnitude of defocus incorporated in the red and blue layers was chosen such that, in one case, it simulated +3 D defocus, referred to as color-signed myopic (CSM) defocus, and in another case it simulated -3 D defocus, referred to as color-signed hyperopic (CSH) defocus. Seventeen subjects viewed the reference stimulus (unaltered movie) and at least one of the two color-signed defocus stimuli for ∼1 hour. Axial length (AL) and choroidal thickness (ChT) were measured immediately before and after each session. AL and subfoveal ChT showed no significant change under any of the three conditions. A significant increase in vitreous chamber depth (VCD) was observed following viewing of the CSH stimulus compared with the reference stimulus (0.034 ± 0.03 mm and 0 ± 0.02 mm, respectively; p = 0.018). A significant thinning of the crystalline lens was observed following viewing of the CSH stimulus relative to the CSM stimulus (-0.033 ± 0.03 mm and 0.001 ± 0.03 mm, respectively; p = 0.015). Differences in the effects of CSM and CSH conditions on VCD and lens thickness suggest a directional, modulatory influence of chromatic defocus. On the other hand, ChT responses showed large variability, rendering it an unreliable biomarker for chromatic defocus-driven responses, at least for the conditions of this study.
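A channel-wise manipulation of this sort can be approximated by blurring the red and blue planes of each frame with different kernel widths while leaving green untouched. The sketch below is schematic: the mapping from diopters of defocus to blur width depends on pupil size and viewing geometry, and the sigma values are placeholders, not the study's calibrated point-spread functions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_signed_defocus(frame_rgb, sigma_red, sigma_blue):
    """Blur the R and B planes of an RGB frame independently, leaving G sharp.
    frame_rgb: H x W x 3 float array; sigma values are blur widths in pixels,
    standing in for the optical point-spread of the simulated defocus."""
    out = frame_rgb.copy()
    out[..., 0] = gaussian_filter(frame_rgb[..., 0], sigma_red)
    out[..., 2] = gaussian_filter(frame_rgb[..., 2], sigma_blue)
    return out

frame = np.random.rand(120, 160, 3)
# Which channel carries the larger blur encodes the sign of the simulated
# defocus (via the eye's longitudinal chromatic aberration); the sigma
# pairings below are placeholders, not the study's calibrated values.
stimulus_a = color_signed_defocus(frame, sigma_red=3.0, sigma_blue=0.5)
stimulus_b = color_signed_defocus(frame, sigma_red=0.5, sigma_blue=3.0)
```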

Citations: 0
How does contextual information affect aesthetic appreciation and gaze behavior in figurative and abstract artwork?
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.8
Soazig Casteau, Daniel T Smith

Numerous studies have investigated how providing contextual information with artwork influences gaze behavior, yet the evidence that contextually triggered changes in oculomotor behavior when exploring artworks may be linked to changes in aesthetic experience remains mixed. The aim of this study was to investigate how three levels of contextual information influenced people's aesthetic appreciation and visual exploration of both abstract and figurative art. Participants were presented with an artwork and one of three contextual information levels: a title, title plus information on the aesthetic design of the piece, or title plus information about the semantic meaning of the piece. We measured participants' liking, interest, and understanding of artworks and recorded exploration duration, fixation count, and fixation duration on regions of interest for each piece. Contextual information produced greater aesthetic appreciation and more visual exploration in abstract artworks. In contrast, figurative artworks were highly dependent on liking preferences and less affected by contextual information. Our results suggest that the effect of contextual information on aesthetic ratings arises from an elaboration effect, such that the viewer's aesthetic experience is enhanced by additional information, but only when the meaning of an artwork is not obvious.

Citations: 0
Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.7
Corey S Shayman, Maggie K McCracken, Hunter C Finney, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr

Auditory landmarks can contribute to spatial updating during navigation with vision. Whereas large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether or not individuals optimally combine auditory cues with visual cues to decrease the amount of perceptual uncertainty, or variability, has not been well-documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with either visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict where auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing compared with the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
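The usual benchmark for optimal combination is the maximum-likelihood prediction, in which each cue is weighted by its reliability (inverse variance) and the combined variance falls below that of either cue alone. The sketch below implements that textbook prediction; it is the standard model against which such data are tested, not the authors' analysis code.

```python
def ml_cue_combination(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood (reliability-weighted) combination of two cues.
    Returns the predicted combined estimate and its variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    mu_combined = w_a * mu_a + w_v * mu_v
    var_combined = (var_a * var_v) / (var_a + var_v)  # below either cue alone
    return mu_combined, var_combined

# Example: a reliable visual landmark dominates a noisier auditory one, and
# the combined variance quantifies the multisensory precision benefit that
# the experiments test for.
print(ml_cue_combination(mu_a=10.0, var_a=4.0, mu_v=8.0, var_v=1.0))
# -> (8.4, 0.8)
```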

Citations: 0
The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.6
Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann

We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.
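Because gaze, head, and video streams are sampled at different rates, a typical first step when working with such a dataset is aligning them on a common timeline. The sketch below pairs each video frame with its temporally nearest gaze sample; the array names and sampling rates are hypothetical, and the VEDB's actual file layout is documented on its hosting platforms.

```python
import numpy as np

def nearest_gaze_samples(frame_times_s, gaze_times_s, gaze_xy):
    """Pair each video frame with the temporally nearest gaze sample."""
    idx = np.searchsorted(gaze_times_s, frame_times_s)
    idx = np.clip(idx, 1, len(gaze_times_s) - 1)
    take_left = (frame_times_s - gaze_times_s[idx - 1]) < (gaze_times_s[idx] - frame_times_s)
    idx[take_left] -= 1
    return gaze_xy[idx]

# Synthetic example: 30 Hz video paired with 120 Hz gaze samples.
frame_t = np.arange(0.0, 10.0, 1 / 30)
gaze_t = np.arange(0.0, 10.0, 1 / 120)
gaze_xy = np.random.rand(len(gaze_t), 2)  # normalized gaze coordinates
per_frame_gaze = nearest_gaze_samples(frame_t, gaze_t, gaze_xy)
```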

Citations: 0
Color-binding errors induced by modulating effects of the preceding stimulus on onset rivalry.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.10
Satoru Abe, Eiji Kimura

Onset rivalry can be modulated by a preceding stimulus with features similar to rivalrous test stimuli. In this study, we used this modulating effect to investigate the integration of color and orientation during onset rivalry using equiluminant chromatic gratings. Specifically, we explored whether this modulating effect leads to a decoupling of color and orientation in chromatic gratings, resulting in a percept distinct from either of the rivalrous gratings. The results demonstrated that color-binding errors can be observed in a form where rivalrous green-gray clockwise and red-gray counterclockwise gratings yield the percept of a bichromatic, red-green grating with either clockwise or counterclockwise orientation. These errors were observed under a brief test duration (30 ms), with both monocular and binocular presentations of the preceding stimulus. The specific color and orientation combination of the preceding stimulus was not critical for inducing color-binding errors, provided it was composed of the test color and orientation. We also found a notable covariant relationship between the perception of color-binding errors and exclusive dominance, where the perceived orientation in color-binding errors generally matched that in exclusive dominance. This finding suggests that the mechanisms underlying color-binding errors may be related to, or partially overlap with, those determining exclusive dominance. These errors can be explained by the decoupling of color and orientation in the representation of the suppressed grating, with the color binding to the dominant grating, resulting in an erroneously perceived bichromatic grating.

Citations: 0
Microsaccadic suppression of peripheral perceptual detection performance as a function of foveated visual image appearance.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.3
Julia Greilich, Matthias P Baumann, Ziad M Hafed

Microsaccades are known to be associated with a deficit in perceptual detection performance for brief probe flashes presented in their temporal vicinity. However, it is still not clear how such a deficit might depend on the visual environment across which microsaccades are generated. Here, and motivated by studies demonstrating an interaction between visual background image appearance and perceptual suppression strength associated with large saccades, we probed peripheral perceptual detection performance of human subjects while they generated microsaccades over three different visual backgrounds. Subjects fixated near the center of a low spatial frequency grating, a high spatial frequency grating, or a small white fixation spot over an otherwise gray background. When a computer process detected a microsaccade, it presented a brief peripheral probe flash at one of four locations (over a uniform gray background) and at different times. After collecting full psychometric curves, we found that both perceptual detection thresholds and slopes of psychometric curves were impaired for peripheral flashes in the immediate temporal vicinity of microsaccades, and they recovered with later flash times. Importantly, the threshold elevations, but not the psychometric slope reductions, were stronger for the white fixation spot than for either of the two gratings. Thus, like with larger saccades, microsaccadic suppression strength can show a certain degree of image dependence. However, unlike with larger saccades, stronger microsaccadic suppression did not occur with low spatial frequency textures. This observation might reflect the different spatiotemporal retinal transients associated with the small microsaccades in our study versus larger saccades.
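Detection thresholds and psychometric slopes of the kind reported here are typically estimated by fitting a psychometric function to proportion-detected data; the sketch below fits a cumulative Gaussian with a fixed lapse rate. The function form, lapse value, and data points are illustrative assumptions, not the study's exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(contrast, threshold, spread, lapse=0.02):
    """Cumulative-Gaussian detection function with a small fixed lapse rate."""
    return (1 - lapse) * norm.cdf((contrast - threshold) / spread)

# Hypothetical proportion-detected data at several probe contrasts.
contrasts = np.array([0.02, 0.05, 0.10, 0.20, 0.40])
p_detect = np.array([0.05, 0.20, 0.55, 0.85, 0.97])

# curve_fit fits only threshold and spread (p0 has two entries); lapse stays fixed.
(threshold, spread), _ = curve_fit(
    psychometric, contrasts, p_detect, p0=[0.1, 0.05],
    bounds=([0.0, 1e-3], [1.0, 1.0]))

# Suppression near microsaccades appears as an elevated threshold; a reduced
# psychometric slope corresponds to a larger fitted spread parameter.
print(threshold, spread)
```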

Citations: 0
Deconstructing the frame effect.
IF 2.0 | CAS Q4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.8
Mohammad Shams, Peter J Kohler, Patrick Cavanagh

The perception of an object's location is profoundly influenced by the surrounding dynamics. This is dramatically demonstrated by the frame effect, where a moving frame induces substantial shifts in the perceived location of objects that flash within it. In this study, we examined the elements contributing to the large magnitude of this effect. Across three experiments, we manipulated the number of probes, the dynamics of the frame, and the spatiotemporal relationships between probes and the frame. We found that the presence of multiple probes amplified the position shift, whereas the accumulation of the frame effect over repeated motion cycles was minimal. Notably, an oscillating frame generated more pronounced effects compared to a unidirectional moving frame. Furthermore, the spatiotemporal distance between the frame and the probe played a pivotal role, with larger shifts observed near the leading edge of the frame. Interestingly, although larger frames produced stronger position shifts, the maximum shift occurred at almost the same distance relative to the frame's center across all tested sizes. Our findings suggest that the number of probes, frame size, relative probe-frame distance, and frame dynamics collectively contribute to the magnitude of the position shift.

Citations: 0