
Journal of Vision: Latest Publications

Spatiotemporal letter processing in visual word recognition uncovered by perceptual oscillations.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-12-01 | DOI: 10.1167/jov.25.14.8
Martin Arguin, Simon Fortier-St-Pierre

Despite decades of intense study, the spatiotemporal processing of letters in visual word recognition has yet to be elucidated, with the debate largely focusing on whether individual letters are processed serially or in parallel. The present study investigated the processing of individual letters and letter combinations through time in visual word recognition using displays where signal-to-noise ratio (SNR) varied randomly throughout a 200 ms exposure duration. In Experiment 1, SNR varied either homogeneously across all letters or independently for each letter position (cf. heterogeneous sampling). Reading accuracy was substantially greater with homogeneous than heterogeneous sampling. Experiment 2 again used heterogeneous sampling and classification images (CIs) were calculated for individual letter positions or conjunctions thereof, reflecting processing efficiency according to time during target exposure. These CIs or their Fourier transforms were passed to a classifier to assess differences in the result patterns across individual letter positions or their conjunctions. Overall, the present results indicate the following: (1) significant parallel letter processing capacity throughout exposure duration; (2) dissociable processing mechanisms for each letter position; and (3) letter position-specific mechanisms for letter conjunctions that are distinct from those for individual letters. The results also provide evidence relevant to the neural code underlying the perceptual mechanisms that were uncovered.
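The classification-image analysis in Experiment 2 can be illustrated with a toy reverse-correlation sketch. The authors' actual pipeline is not given in this abstract, so the function name, trial counts, and decision rule below are illustrative assumptions: a temporal CI is the mean per-timepoint SNR profile on correct trials minus that on error trials.

```python
import numpy as np

def temporal_classification_image(snr, correct):
    """Reverse-correlation CI: mean SNR time course on correct trials
    minus mean SNR time course on error trials."""
    snr = np.asarray(snr, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return snr[correct].mean(axis=0) - snr[~correct].mean(axis=0)

# Synthetic demo: accuracy is driven only by SNR in the first 10 timepoints,
# so the CI should be clearly positive early and near zero late.
rng = np.random.default_rng(1)
snr = rng.normal(size=(500, 40))          # 500 trials x 40 timepoints
correct = snr[:, :10].mean(axis=1) > 0    # early SNR decides the trial
ci = temporal_classification_image(snr, correct)
```

In the study, such per-position CIs (or their Fourier transforms) would then be fed to a classifier to compare letter positions; the sketch stops at the CI itself.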

Journal of Vision 25(14): 8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710787/pdf/
Citations: 0
Divergent roles of visual structure and conceptual meaning in scene detection and categorization.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-12-01 | DOI: 10.1167/jov.25.14.21
Sage Aronson, Maria S Adkins, Michelle R Greene

Human observers can recognize the meaning of a complex visual scene in a fraction of a second, but not all scenes are equally easy to recognize at a glance. What governs this variability? We tested the hypothesis that scene understanding is modulated by two distinct forms of information: visual information, defined as the structural complexity of the image, and semantic information, defined as the richness of the conceptual content of the scene. We quantified visual information using image compressibility and quantified semantic information from the complexity of human-written scene descriptions. Across four behavioral experiments, participants performed either a rapid detection task (distinguishing intact scenes from phase-scrambled masks) or a basic-level categorization task. High visual information impaired both detection and categorization, consistent with a perceptual bottleneck. In contrast, high-semantic information facilitated detection but not categorization, suggesting that conceptual richness facilitates early perceptual processes without necessarily improving recognition. These findings reveal a dissociation between visual and semantic scene attributes and suggest that top-down expectations can selectively support early perceptual processing.
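The compressibility-based measure of visual information can be sketched in a hedged way. The authors' exact pipeline is not specified in this abstract, so the zlib-based ratio, image sizes, and function name below are illustrative assumptions rather than their method: a less compressible image carries more structural (visual) information under this proxy.

```python
import zlib
import numpy as np

def compression_ratio(image: np.ndarray) -> float:
    """Compressed size divided by raw size. Higher ratio means the image
    is less compressible, i.e., structurally more complex."""
    raw = np.ascontiguousarray(image, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

# A uniform image is highly compressible (low visual information) ...
flat = np.zeros((64, 64), dtype=np.uint8)
# ... while pixel noise is nearly incompressible (high visual information).
noise = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

r_flat = compression_ratio(flat)
r_noise = compression_ratio(noise)
```

Real photographs would fall between these two extremes, which is what makes the ratio usable as a graded complexity score.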

Journal of Vision 25(14): 21. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743497/pdf/
Citations: 0
Boundary extension during naturalistic viewing.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-12-01 | DOI: 10.1167/jov.25.14.17
Akseli Pullinen, Riikka Mononen, Jaana Simola, Linda Henriksson

Boundary extension refers to a phenomenon in which individuals are likely to remember an image as having more content beyond its actual borders, mistakenly adding details that might have been just beyond the original edges. Despite the wealth of research published about the topic over many decades, most research has used simple two-dimensional (2D) images as stimuli. Consequently, there is insufficient evidence that boundary extension as a phenomenon generalizes to real-world scenarios with naturalistic viewing behavior. To address this gap, we designed a virtual reality (VR) experiment during which the participants (N = 60) were freely able to visually explore naturalistic three-dimensional indoor environments surrounding them. In the experiment, each participant visited each of the 20 virtual rooms twice: first to view the scene and then to complete a task. Their task during the second visit was to move to the location from which they had originally viewed the scene, hence matching their view of the scene to what they remembered seeing before. Especially for close-up views, participants ended their task at a location where their field of view of the scene was wider compared to the initial view, hence indicating boundary extension. The effect was also greater when the movement direction was forward from a wider field of view than that of the original view. Both findings are consistent with previous research and demonstrate that boundary extension is not limited to looking at 2D images but can also occur during naturalistic viewing scenarios. As our method showed no visible boundaries in the stimuli, our results suggest that the existence of such boundaries is not critical for eliciting boundary extension.

Journal of Vision 25(14): 17. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716452/pdf/
Citations: 0
Spatiotemporal predictability of saccades modulates postsaccadic feature interference.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-12-01 | DOI: 10.1167/jov.25.14.1
Tzu-Yao Chiu, Isabel Jaen, Julie D Golomb

Spatial attention and eye movements jointly contribute to efficient sampling of visual information in the environment, but maintaining precise spatial attention across saccades becomes challenging due to the drastic retinal shifts. Previous studies have provided evidence that spatial attention may remap imperfectly across saccades, incurring systematic feature interference with ongoing perception, yet the role of saccade predictability remains largely untested. In the current study, we investigated whether spatiotemporal predictability of saccades influences postsaccadic remapping and feature perception. In two preregistered experiments, we implemented the postsaccadic feature report paradigm and manipulated spatiotemporal predictability of saccades. Experiment 1 manipulated spatial and temporal saccade predictability together, whereas Experiment 2 dissociated the roles of spatial and temporal predictability in separate conditions. In addition to spatial and temporal saccade predictability both improving general task performance, we found that spatial saccade predictability specifically modulated postsaccadic feature interference. When saccades were spatially unpredictable, "swap errors" occurred at the early postsaccadic time point, where participants misreported the retinotopic color instead of the spatiotopic target color. However, swap errors were reduced when saccades were made spatially predictable. These results suggest that systematic feature interference associated with postsaccadic remapping is malleable to expectations of the upcoming saccade target location, highlighting the role of predictions in maintaining perceptual stability across saccades.

Journal of Vision 25(14): 1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12697699/pdf/
Citations: 0
Sex differences in fixational eye movements following concussion.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-12-01 | DOI: 10.1167/jov.25.14.9
Richard Johnston, Cameran Thompson, Anthony P Kontos, Min Zhang, Cyndi L Holland, Aaron J Zynda, Christy K Sheehy, Ethan A Rossi

Recent research supports impairments in fixational eye movements (FEMs), small motions of the eye that occur during periods when gaze is maintained on a fixed target, as an objective biomarker of concussion. Preliminary work has demonstrated that fixational saccades are larger following a concussion; however, sex differences in FEMs and fixational saccades have not been examined. In this study, we used retinal image-based eye tracking, with a tracking scanning laser ophthalmoscope (TSLO), to record FEMs while adolescents with concussion (n = 44; age range, 13-27 years) and age- and sex-matched healthy controls (n = 44; age range, 13-27 years) fixated the center or corner of the TSLO imaging raster. To improve reliability and overcome errors associated with the manual labeling of FEMs, an objective velocity-based algorithm was used to detect fixational saccades. Concussion patients made larger fixational saccades than controls but only on the center task. Females made larger fixational saccades than males on this task irrespective of injury group, whereas no significant difference was supported for the corner task. Females also made fewer horizontal and more oblique fixational saccades than males on the corner task. These findings highlight the importance of controlling for task- and sex-specific differences when evaluating FEMs as a biomarker for concussion.
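The "objective velocity-based algorithm" is not detailed in this abstract; a minimal fixed-threshold sketch of velocity-based saccade detection, with illustrative sampling rate, threshold, and duration parameters (all assumptions, not the authors' values), might look like:

```python
import numpy as np

def detect_saccades(x, y, fs=480.0, vel_thresh=30.0, min_samples=3):
    """Flag saccades wherever eye speed (deg/s) exceeds vel_thresh for
    at least min_samples consecutive samples.
    x, y: gaze position traces in degrees; fs: sampling rate in Hz.
    Returns a list of (start, end) sample-index pairs."""
    vx = np.gradient(np.asarray(x, dtype=float)) * fs
    vy = np.gradient(np.asarray(y, dtype=float)) * fs
    above = np.hypot(vx, vy) > vel_thresh
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above)))
    return events

# Synthetic trace: steady fixation with one rapid 1-degree jump.
x = np.zeros(100)
x[50:55] = np.linspace(0.0, 1.0, 5)
x[55:] = 1.0
y = np.zeros(100)
events = detect_saccades(x, y)
```

Published detectors (e.g., adaptive-threshold variants) typically scale the threshold to each trace's noise level rather than fixing it; the fixed threshold here just keeps the sketch short.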

Journal of Vision 25(14): 9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710789/pdf/
Citations: 0
The effect of spatial attention on saccadic adaptation.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-12-01 | DOI: 10.1167/jov.25.14.13
Ali Batikh, Éric Koun, Roméo Salemme, Alessandro Farnè, Denis Pélisson

Eye movements and spatial attention are both crucial to visual perception. Orienting gaze to objects of interest is achieved by voluntary saccades (VSs) driven by internal goals or reactive saccades (RSs) triggered automatically by sudden environmental changes. Both VSs and RSs are known to undergo plastic adjustments to maintain their accuracy throughout life, driven by saccadic adaptation processes. Spatial attention enhances visual processing within a restricted zone, and it can be shifted voluntarily following our internal goals (endogenous) or automatically in response to unexpected changes in sensory stimulation (exogenous). Despite the widely accepted notion that saccadic and attention shifts are governed by distinct but highly interconnected systems, the relationship between saccadic adaptation and spatial attention is still unclear. To address this relationship, we conducted two experiments combining modified versions of the double-step adaptation paradigm and the attention-orienting paradigm. Experiment 1 tested the effect of shifting exogenous attention by a tactile cue near or away from the saccade's target on RS adaptation. Experiment 2 also used tactile cueing but now to investigate the effect of shifting endogenous attention on VS adaptation. Although we were unable to obtain direct evidence for an effect of spatial attention on saccadic adaptation, correlation analyses indicated that both the rate and magnitude of saccadic adaptation were positively correlated with the allocation of attention toward the saccade target and negatively correlated with attention directed away from the target.

Journal of Vision 25(14): 13. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721433/pdf/
Citations: 0
Attention can shift the reference eye under binocular fusion failure: A case report.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-12-01 | DOI: 10.1167/jov.25.14.15
Jiahao Wu, Tengfei Han, Qian Wang, Lian Tang, Yumei Zhang, Zhanjun Zhang, Zaizhu Han

Binocular fusion normally relies on a "cyclopean eye" that integrates image disparities between the two eyes into a single coherent percept. When fusion fails, how the brain chooses its spatial reference frame remains unclear. Here, we report a rare case of a 44-year-old man who developed multiple-directions diplopia following surgical resection of a cerebellar vermis hemangioblastoma. Clinical tests showed deficits in several extraocular muscles. Experimentally, in binocular and dichoptic viewing, perception was always anchored to the left eye with the right eye's image misaligned, whereas monocular viewing produced no diplopia. Crucially, the patient could voluntarily switch to the right eye as reference, which was independent of stimulus shape similarity, stimulus exposure order, or participant response demands. This case offers a unique window to understand the relationship between automatic sensory integration and top-down control in binocular vision: When cyclopean fusion breaks down, visual perception adapts to a single-eye reference frame that can be flexibly influenced by attention.

Journal of Vision 25(14): 15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721435/pdf/
Citations: 0
Allocentric spatial representations dominate when switching between real and virtual worlds.
IF 2.3 | CAS Tier 4, Psychology | Q2 OPHTHALMOLOGY | Pub Date: 2025-11-03 | DOI: 10.1167/jov.25.13.7
Meaghan McManus, Franziska Seifert, Immo Schütz, Katja Fiehler

After removing a virtual reality headset, people can be surprised to find that they are facing a different direction than expected. Here, we investigated if people can maintain spatial representations of one environment while immersed in another. In the first three experiments, stationary participants were asked to point to previously seen targets in one environment, either the real world or a virtual environment, while in the other environment. We varied the amount of misalignment between the two environments (detectable or undetectable), the virtual environment itself (lab or kitchen), and the instructions (general or egocentric priming). Pointing endpoints were based primarily on the locations of objects in the currently seen environment, suggesting a strong reliance on allocentric cues. In the fourth experiment, participants moved in virtual reality while keeping track of an unseen real-world target. We confirmed that the pointing errors were due to a reliance on the currently seen environment. It appears that people hardly ever keep track of object positions in a previously seen environment and instead primarily rely on currently available spatial information to plan their actions.

Journal of Vision, 25(13), 7. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12629136/pdf/
Representational dynamics of the main dimensions of object space: Face/body selectivity aligns temporally with animal taxonomy but not with animacy.
IF 2.3 | CAS Region 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2025-11-03 | DOI: 10.1167/jov.25.13.2
Gaëlle Leys, Chiu-Yueh Chen, Andreas von Leupoldt, J Brendan Ritchie, Hans Op de Beeck

Object representations are organized according to multiple dimensions, with an important role for the distinction between animate and inanimate objects and for selectivity for faces versus bodies. For other dimensions, the question remains how they relate to these two primary dimensions. One such dimension is a graded selectivity for the taxonomic level that an animal belongs to. Earlier research suggested that animacy can be understood as a graded selectivity for animal taxonomy, although a recent functional magnetic resonance imaging study suggested that taxonomic effects are instead due to face/body selectivity. Here we investigated the temporal profile at which these distinctions emerge with multivariate electroencephalography (N = 25), using a stimulus set that dissociates taxonomy from face/body selectivity and from animacy as a binary distinction. Our findings reveal a very similar temporal profile for taxonomy and face/body selectivity, with a peak around 150 ms. The binary animacy distinction has a more continuous and delayed temporal profile. These findings strengthen the conclusion that effects of animal taxonomy are in large part due to face/body selectivity, whereas selectivity for animate versus inanimate objects is delayed when it is dissociated from these other dimensions.

Journal of Vision, 25(13), 2. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12598827/pdf/
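The temporal profiles above come from time-resolved multivariate decoding: a classifier is trained and tested independently at each time point of the EEG epoch, and the resulting accuracy curve shows when a distinction (e.g., taxonomy or face/body selectivity) becomes decodable. A minimal sketch on synthetic data, assuming a nearest-centroid decoder and a simple balanced train/test split; the data shapes, seed, and injected effect window are illustrative and not the authors' pipeline.

```python
import numpy as np

def timepoint_decoding(X, y, n_train_per_class):
    """Time-resolved decoding: fit a nearest-centroid classifier
    independently at each time point and return accuracy over time.

    X : array (n_trials, n_channels, n_times), epoched EEG data
    y : array (n_trials,), binary condition labels (0/1)
    """
    rng = np.random.default_rng(0)
    n_trials, _, n_times = X.shape
    # balanced train/test split across the two classes
    train_idx, test_idx = [], []
    for c in (0, 1):
        idx = rng.permutation(np.flatnonzero(y == c))
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:])
    train_idx, test_idx = np.array(train_idx), np.array(test_idx)

    acc = np.empty(n_times)
    for t in range(n_times):
        Xt_train, Xt_test = X[train_idx, :, t], X[test_idx, :, t]
        # class centroids in sensor space at this time point
        c0 = Xt_train[y[train_idx] == 0].mean(axis=0)
        c1 = Xt_train[y[train_idx] == 1].mean(axis=0)
        d0 = np.linalg.norm(Xt_test - c0, axis=1)
        d1 = np.linalg.norm(Xt_test - c1, axis=1)
        pred = (d1 < d0).astype(int)  # closer centroid wins
        acc[t] = np.mean(pred == y[test_idx])
    return acc

# Synthetic demo: a class difference is injected only in the late
# half of the epoch, mimicking an effect that emerges ~150 ms in.
rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 80, 32, 60
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_channels, n_times))
signal = rng.standard_normal(n_channels)
X[y == 1, :, 30:] += signal[:, None]  # effect in late window only

acc = timepoint_decoding(X, y, n_train_per_class=30)
print(acc[:30].mean(), acc[30:].mean())  # late window decodes better
```

The per-timepoint accuracy curve is the quantity whose peak latency the study compares across distinctions; in this sketch, accuracy stays at chance in the early window and rises once the injected effect begins.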
Contrast negation increases face pareidolia rates in natural scenes.
IF 2.3 | CAS Region 4 (Psychology) | JCR Q2 (Ophthalmology) | Pub Date: 2025-11-03 | DOI: 10.1167/jov.25.13.5
Benjamin Balas, Myra Morton, Molly Setchfield, Lily Roshau, Emily Westrick

Face pareidolia, the phenomenon of seeing face-like patterns in non-face images, has a dual nature: Pareidolic patterns are experienced as face-like, even while observers can recognize the true nature of the stimulus (Stuart et al., 2025). Although pareidolic faces seem to result largely from the canonical arrangement of eye spots and a mouth, we hypothesized that competition between veridical and face-like interpretations of pareidolic patterns may constrain face pareidolia in natural scenes and textures. Specifically, we predicted that contrast negation, which disrupts multiple aspects of mid- to high-level recognition, may increase rates of face pareidolia in complex natural textures by weakening the veridical, non-face stimulus interpretation. We presented adult participants (n = 27) and 5- to 12-year-old children (n = 67) with a series of natural images depicting textures such as grass, leaves, shells, and rocks. We asked participants to circle any patterns in each image that looked face-like, with no constraints on response time or pattern size, position, and orientation. We found that, across our adult and child samples, contrast-negated images yielded more pareidolic face detections than positive images. We conclude that disrupting veridical object and texture recognition enhances pareidolia in children and adults by compromising half of the dual nature of a pareidolic pattern.

Journal of Vision, 25(13), 5. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12617666/pdf/
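Contrast negation itself is a pointwise operation: each pixel's luminance is inverted about the midpoint of the display range, so an 8-bit value v maps to 255 − v. Spatial structure (edges, the layout of eye-spot-like blobs) is preserved, which is why pareidolic patterns can survive negation while veridical texture recognition is disrupted. A minimal sketch in NumPy; the grayscale assumption and the toy patch are illustrative, not the study's stimuli.

```python
import numpy as np

def negate_contrast(img):
    """Contrast-negate an 8-bit grayscale image: dark becomes light
    and vice versa, while spatial structure is preserved."""
    img = np.asarray(img, dtype=np.uint8)
    return 255 - img

# a toy 3x3 "texture" patch
patch = np.array([[0, 128, 255],
                  [64, 32, 200],
                  [10, 250, 90]], dtype=np.uint8)
neg = negate_contrast(patch)
print(neg[0])  # -> [255 127   0]

# negation is its own inverse: applying it twice restores the image
assert np.array_equal(negate_contrast(neg), patch)
```

Because the operation is an involution, the negated stimuli carry exactly the same spatial information as the positives; only the polarity-dependent cues that support veridical recognition are altered.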