
Journal of Vision: Latest Publications

How does contextual information affect aesthetic appreciation and gaze behavior in figurative and abstract artwork?
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.8
Soazig Casteau, Daniel T Smith

Numerous studies have investigated how providing contextual information with artwork influences gaze behavior, yet the evidence that contextually triggered changes in oculomotor behavior when exploring artworks may be linked to changes in aesthetic experience remains mixed. The aim of this study was to investigate how three levels of contextual information influenced people's aesthetic appreciation and visual exploration of both abstract and figurative art. Participants were presented with an artwork and one of three contextual information levels: a title, title plus information on the aesthetic design of the piece, or title plus information about the semantic meaning of the piece. We measured participants' liking, interest, and understanding of artworks and recorded exploration duration, fixation count, and fixation duration on regions of interest for each piece. Contextual information produced greater aesthetic appreciation and more visual exploration in abstract artworks. In contrast, figurative artworks were highly dependent on liking preferences and less affected by contextual information. Our results suggest that the effect of contextual information on aesthetic ratings arises from an elaboration effect, such that the viewer's aesthetic experience is enhanced by additional information, but only when the meaning of an artwork is not obvious.
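
The per-piece gaze measures mentioned above (fixation count and dwell time per region of interest) can be computed in a few lines. The sketch below is a hypothetical illustration, not the authors' analysis pipeline; the `Fixation` fields and rectangular ROI format are assumptions.

```python
# Hypothetical sketch: summarizing gaze behavior per rectangular region of interest (ROI).
# The Fixation fields and ROI format are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Fixation:
    x: float            # horizontal gaze position in pixels
    y: float            # vertical gaze position in pixels
    duration_ms: float  # fixation duration in milliseconds

def roi_metrics(fixations: List[Fixation],
                rois: Dict[str, Tuple[float, float, float, float]]) -> Dict[str, Dict[str, float]]:
    """Count fixations and sum dwell time for each ROI given as (x0, y0, x1, y1)."""
    out = {name: {"fixation_count": 0, "dwell_ms": 0.0} for name in rois}
    for f in fixations:
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                out[name]["fixation_count"] += 1
                out[name]["dwell_ms"] += f.duration_ms
    return out

# Example: two fixations, one ROI covering the left half of a 1920 x 1080 display.
fixes = [Fixation(400, 500, 220.0), Fixation(1500, 300, 180.0)]
print(roi_metrics(fixes, {"left_half": (0, 0, 960, 1080)}))
```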

Citations: 0
Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.7
Corey S Shayman, Maggie K McCracken, Hunter C Finney, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr

Auditory landmarks can contribute to spatial updating during navigation with vision. Whereas large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether or not individuals optimally combine auditory cues with visual cues to decrease the amount of perceptual uncertainty, or variability, has not been well-documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with either visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict where auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing compared with the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
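
The "optimal combination" tested here is usually formalized as maximum-likelihood (reliability-weighted) cue integration. The sketch below illustrates that standard prediction with made-up numbers; it is not code or data from the study.

```python
# Minimal sketch of the standard maximum-likelihood (reliability-weighted) cue-combination
# prediction tested in such experiments; the numbers are illustrative, not study data.
def mle_combine(est_a: float, var_a: float, est_v: float, var_v: float):
    """Combine an auditory and a visual location estimate, weighting by reliability."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)       # auditory weight
    w_v = 1 - w_a                                      # visual weight
    combined_est = w_a * est_a + w_v * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)   # never larger than either single-cue variance
    return combined_est, combined_var

# Example: a reliable visual cue (variance 1.0) dominates a noisier auditory cue (variance 4.0).
print(mle_combine(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0))  # -> (11.6, 0.8)
```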

Citations: 0
The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.6
Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann

We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.
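
As a hypothetical illustration of the kind of preprocessing such combined gaze recordings support (not part of the VEDB toolchain), the sketch below flags gaze samples as saccadic with a simple velocity threshold; the sampling rate, column layout, and 30 deg/s threshold are assumptions.

```python
# Hypothetical sketch (not part of the VEDB toolchain): a simple velocity-threshold (I-VT)
# classifier that flags gaze samples as saccadic. Sampling rate, units, and the 30 deg/s
# threshold are illustrative assumptions.
import numpy as np

def classify_saccades(t_s: np.ndarray, x_deg: np.ndarray, y_deg: np.ndarray,
                      velocity_threshold: float = 30.0) -> np.ndarray:
    """Return a boolean array that is True where gaze velocity exceeds the threshold (deg/s)."""
    dt = np.diff(t_s)                                        # seconds between samples
    vel = np.hypot(np.diff(x_deg), np.diff(y_deg)) / dt      # point-to-point velocity in deg/s
    return np.concatenate([[False], vel > velocity_threshold])

# Example: 120 Hz samples with an 8-degree jump halfway through the trace.
rng = np.random.default_rng(1)
t = np.arange(0, 1, 1 / 120)
x = np.where(t < 0.5, 0.0, 8.0) + 0.05 * rng.standard_normal(t.size)
y = np.zeros_like(x)
print(int(classify_saccades(t, x, y).sum()), "samples flagged as saccadic")
```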

Citations: 0
Color-binding errors induced by modulating effects of the preceding stimulus on onset rivalry.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.10
Satoru Abe, Eiji Kimura

Onset rivalry can be modulated by a preceding stimulus with features similar to rivalrous test stimuli. In this study, we used this modulating effect to investigate the integration of color and orientation during onset rivalry using equiluminant chromatic gratings. Specifically, we explored whether this modulating effect leads to a decoupling of color and orientation in chromatic gratings, resulting in a percept distinct from either of the rivalrous gratings. The results demonstrated that color-binding errors can be observed in a form where rivalrous green-gray clockwise and red-gray counterclockwise gratings yield the percept of a bichromatic, red-green grating with either clockwise or counterclockwise orientation. These errors were observed under a brief test duration (30 ms), with both monocular and binocular presentations of the preceding stimulus. The specific color and orientation combination of the preceding stimulus was not critical for inducing color-binding errors, provided it was composed of the test color and orientation. We also found a notable covariant relationship between the perception of color-binding errors and exclusive dominance, where the perceived orientation in color-binding errors generally matched that in exclusive dominance. This finding suggests that the mechanisms underlying color-binding errors may be related to, or partially overlap with, those determining exclusive dominance. These errors can be explained by the decoupling of color and orientation in the representation of the suppressed grating, with the color binding to the dominant grating, resulting in an erroneously perceived bichromatic grating.

Citations: 0
Microsaccadic suppression of peripheral perceptual detection performance as a function of foveated visual image appearance.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.3
Julia Greilich, Matthias P Baumann, Ziad M Hafed

Microsaccades are known to be associated with a deficit in perceptual detection performance for brief probe flashes presented in their temporal vicinity. However, it is still not clear how such a deficit might depend on the visual environment across which microsaccades are generated. Here, and motivated by studies demonstrating an interaction between visual background image appearance and perceptual suppression strength associated with large saccades, we probed peripheral perceptual detection performance of human subjects while they generated microsaccades over three different visual backgrounds. Subjects fixated near the center of a low spatial frequency grating, a high spatial frequency grating, or a small white fixation spot over an otherwise gray background. When a computer process detected a microsaccade, it presented a brief peripheral probe flash at one of four locations (over a uniform gray background) and at different times. After collecting full psychometric curves, we found that both perceptual detection thresholds and slopes of psychometric curves were impaired for peripheral flashes in the immediate temporal vicinity of microsaccades, and they recovered with later flash times. Importantly, the threshold elevations, but not the psychometric slope reductions, were stronger for the white fixation spot than for either of the two gratings. Thus, like with larger saccades, microsaccadic suppression strength can show a certain degree of image dependence. However, unlike with larger saccades, stronger microsaccadic suppression did not occur with low spatial frequency textures. This observation might reflect the different spatiotemporal retinal transients associated with the small microsaccades in our study versus larger saccades.
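
The thresholds and slopes reported above come from psychometric-function fits. The sketch below shows a generic cumulative-Gaussian fit to illustrative detection data using SciPy; it is not the authors' fitting procedure, and lapse/guess rates are deliberately omitted.

```python
# Minimal sketch, with made-up numbers: fitting a cumulative-Gaussian psychometric function
# to flash-detection data to recover a detection threshold (mu) and slope (1/sigma).
# Generic illustration only; lapse and guess rates are omitted for brevity.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(contrast, mu, sigma):
    """Probability of detecting a probe flash of the given contrast."""
    return norm.cdf(contrast, loc=mu, scale=sigma)

contrasts = np.array([0.02, 0.04, 0.08, 0.16, 0.32])   # probe contrasts (illustrative)
p_detect  = np.array([0.10, 0.25, 0.55, 0.85, 0.98])   # proportion of flashes detected

(mu_hat, sigma_hat), _ = curve_fit(psychometric, contrasts, p_detect, p0=[0.08, 0.05])
print(f"threshold ~ {mu_hat:.3f}, slope ~ {1 / sigma_hat:.1f}")
```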

Citations: 0
Deconstructing the frame effect.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.8
Mohammad Shams, Peter J Kohler, Patrick Cavanagh

The perception of an object's location is profoundly influenced by the surrounding dynamics. This is dramatically demonstrated by the frame effect, where a moving frame induces substantial shifts in the perceived location of objects that flash within it. In this study, we examined the elements contributing to the large magnitude of this effect. Across three experiments, we manipulated the number of probes, the dynamics of the frame, and the spatiotemporal relationships between probes and the frame. We found that the presence of multiple probes amplified the position shift, whereas the accumulation of the frame effect over repeated motion cycles was minimal. Notably, an oscillating frame generated more pronounced effects compared to a unidirectional moving frame. Furthermore, the spatiotemporal distance between the frame and the probe played pivotal roles, with larger shifts observed near the leading edge of the frame. Interestingly, although larger frames produced stronger position shifts, the maximum shift occurred almost at the same distance relative to the frame's center across all tested sizes. Our findings suggest that the number of probes, frame size, relative probe-frame distance, and frame dynamics collectively contribute to the magnitude of the position shift.

Citations: 0
Implied occlusion and subset underestimation contribute to the weak-outnumber-strong numerosity illusion.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.14
Eliana G Dellinger, Katelyn M Becker, Frank H Durgin

Four experimental studies are reported using a total of 712 participants to investigate the basis of a recently reported numerosity illusion called "weak-outnumber-strong" (WOS). In the weak-outnumber-strong illusion, when equal numbers of white and gray dots (e.g., 50 of each) are intermixed against a darker gray background, the gray dots seem much more numerous than the white. Two principles seem to be supported by these new results: 1) Subsets of mixtures are generally underestimated; thus, in mixtures of red and green dots, both sets are underestimated (using a matching task) just as the white dots are in the weak-outnumber-strong illusion, but 2) the gray dots seem to be filled in as if partially occluded by the brighter white dots. This second principle is supported by manipulations of depth perception both by pictorial cues (partial occlusion) and by binocular cues (stereopsis), such that the illusion is abolished when the gray dots are depicted as closer than the white dots, but remains strong when they are depicted as lying behind the white dots. Finally, an online investigation of a prior false-floor hypothesis concerning the effect suggests that manipulations of relative contrast may affect the segmentation process, which produces the visual bias known as subset underestimation.

Citations: 0
Serial dependencies for externally and self-generated stimuli.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.1
Clara Fritz, Antonella Pomè, Eckart Zimmermann

Our senses are constantly exposed to external stimulation. Part of the sensory stimulation is produced by our own movement, like visual motion on the retina or tactile sensations from touch. Sensations caused by our movements appear attenuated. The interpretation of current stimuli is influenced by previous experiences, known as serial dependencies. Here we investigated how sensory attenuation and serial dependencies interact. In Experiment 1, we showed that temporal predictability causes sensory attenuation. In Experiment 2, we isolated temporal predictability in a visuospatial localization task. Attenuated stimuli are influenced by serial dependencies. However, the magnitude of the serial dependence effects varies, with greater effects when the certainty of the previous trial is equal to or greater than the current one. Experiment 3 examined sensory attenuation's influence on serial dependencies. Participants localized a briefly flashed stimulus after pressing a button (self-generated) or without pressing a button (externally generated). Stronger serial dependencies occurred in self-generated trials compared to externally generated ones when presented alternately but not when presented in blocks. We conclude that the relative uncertainty in stimulation between trials determines serial dependency strengths.
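
One simple way to quantify serial dependence of the kind described here is to ask whether the current response error is pulled toward the previous stimulus. The sketch below is a hypothetical illustration with simulated data, not the authors' analysis.

```python
# Hypothetical sketch: a simple index of serial dependence in a localization task, asking
# whether the current response error is pulled toward the previous stimulus.
# The simulated data and the sign-folding step are illustrative, not the authors' analysis.
import numpy as np

def serial_dependence_index(stimuli: np.ndarray, responses: np.ndarray) -> float:
    """Mean response error folded by the direction of the previous stimulus.

    Positive values indicate attraction toward the preceding trial.
    """
    error = responses[1:] - stimuli[1:]          # error on the current trial
    delta_prev = stimuli[:-1] - stimuli[1:]      # previous minus current stimulus
    folded = np.sign(delta_prev) * error         # positive = pulled toward the previous trial
    return float(np.mean(folded[delta_prev != 0]))

# Simulate 500 trials with a weak attractive pull (15% of the previous-current difference).
rng = np.random.default_rng(0)
stim = rng.uniform(-10, 10, size=500)
pull = 0.15 * np.concatenate([[0.0], stim[:-1] - stim[1:]])
resp = stim + pull + rng.normal(0, 1, size=500)
print(round(serial_dependence_index(stim, resp), 2))   # clearly positive -> attractive dependence
```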

Citations: 0
Ensemble percepts of colored targets among distractors are influenced by hue similarity, not categorical identity.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.12
Lari S Virtanen, Toni P Saarela, Maria Olkkonen

Color can be used to group similar elements, and ensemble percepts of color can be formed for such groups. In real-life settings, however, elements of similar color are often spatially interspersed among other elements and seen against a background. Forming an ensemble percept of these elements would require the segmentation of the correct color signals for integration. Can the human visual system do this? We examined whether observers can extract the ensemble mean hue from a target hue distribution among distractors and whether a color category boundary between target and distractor hues facilitates ensemble hue formation. Observers were able to selectively judge the target ensemble mean hue, but the presence of distractor hues added noise to the ensemble estimates and caused perceptual biases. The more similar the distractor hues were to the target hues, the noisier the estimates became, possibly reflecting incomplete or inaccurate segmentation of the two hue ensembles. Asymmetries between nominally equidistant distractors and substantial individual variability, however, point to additional factors beyond simple mixing of target and distractor distributions. Finally, we found no evidence for categorical facilitation in selective ensemble hue formation.
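
Because hue is circular, an ensemble "mean hue" has to be computed as a circular mean rather than a plain arithmetic average. The sketch below assumes hues are coded as angles in degrees; it is a minimal illustration, not the study's analysis code.

```python
# Minimal sketch, assuming hues are coded as angles in degrees on a hue circle: an ensemble
# "mean hue" must be computed as a circular mean, not a plain arithmetic average.
import numpy as np

def circular_mean_deg(hues_deg: np.ndarray) -> float:
    """Circular mean of hue angles in degrees, returned in [0, 360)."""
    radians = np.deg2rad(hues_deg)
    mean_angle = np.arctan2(np.mean(np.sin(radians)), np.mean(np.cos(radians)))
    return float(np.rad2deg(mean_angle) % 360)

# Example: hues straddling the 0/360 boundary average to ~0 degrees, not 180 degrees.
print(circular_mean_deg(np.array([350.0, 10.0])))
```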

Citations: 0
The dichoptic contrast ordering test: A method for measuring the depth of binocular imbalance.
IF 2.0 | Tier 4 (Psychology) | Q2 (Ophthalmology) | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.2
Alex S Baldwin, Marie-Céline Lorenzini, Annabel Wing-Yan Fan, Robert F Hess, Alexandre Reynaud

In binocular vision, the relative strength of the input from the two eyes can have significant functional impact. These inputs are typically balanced; however, in some conditions (e.g., amblyopia), one eye will dominate over the other. To quantify imbalances in binocular vision, we have developed the Dichoptic Contrast Ordering Test (DiCOT). Implemented on a tablet device, the program uses rankings of perceived contrast (of dichoptically presented stimuli) to find a scaling factor that balances the two eyes. We measured how physical interventions (applied to one eye) affect the DiCOT measurements, including neutral density (ND) filters, Bangerter filters, and optical blur introduced by a +3-diopter (D) lens. The DiCOT results were compared to those from the Dichoptic Letter Test (DLT). Both the DiCOT and the DLT showed excellent test-retest reliability; however, the magnitude of the imbalances introduced by the interventions was greater in the DLT. Rescaling the DiCOT results from individual conditions brought the two methods into good agreement. However, the adjustments required for the +3-D lens condition were quite different from those for the ND and Bangerter filters. Our results indicate that the DiCOT and DLT measures partially separate aspects of binocular imbalance. This supports the simultaneous use of both measures in future studies.
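
As a rough illustration of how contrast-comparison judgments could yield an interocular scaling factor (this is not the published DiCOT algorithm), the sketch below fits the log contrast ratio at which the two eyes' stimuli are equally likely to be ranked higher; all numbers are made up.

```python
# Rough illustration (not the published DiCOT algorithm): turning dichoptic contrast-comparison
# judgments into an interocular scaling factor by fitting the log contrast ratio at which the
# two eyes' stimuli are equally likely to be ranked higher. All numbers below are made up.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_right_ranked_higher(log_ratio, balance, sigma):
    """Probability the right-eye stimulus is ranked higher, vs. log10(C_right / C_left)."""
    return norm.cdf(log_ratio, loc=balance, scale=sigma)

log_ratios = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])      # illustrative interocular contrast ratios
p_right    = np.array([0.05, 0.20, 0.45, 0.80, 0.95])   # illustrative choice proportions

(balance_hat, sigma_hat), _ = curve_fit(p_right_ranked_higher, log_ratios, p_right, p0=[0.0, 0.2])
# balance_hat > 0 means the right eye needs extra physical contrast to appear matched,
# i.e., the left eye dominates by roughly a factor of 10 ** balance_hat.
print(f"interocular balance factor ~ {10 ** balance_hat:.2f}")
```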

Citations: 0