
Attention Perception & Psychophysics — Latest publications

Multiple visual items can be simultaneously compared with target templates in memory
IF 1.7 | CAS Zone 4, Psychology | Q3 PSYCHOLOGY | Pub Date: 2024-06-05 | DOI: 10.3758/s13414-024-02906-6
Yujie Zheng, Jiafei Lou, Yunrong Lu, Zhi Li

When we search for something, we often rely on both what we see and what we remember. This process can be divided into three stages: selecting items, identifying those items, and comparing them with what we are trying to find in our memory. It has been suggested that we select items one by one, and we can identify several items at once. In the present study, we tested whether we need to finish comparing a selected item in the visual display with one or more target templates in memory before we can move on to the next selected item. In Experiment 1, observers looked for either one or two target types in a rapid serial stream of stimuli. The time interval between the presentation onsets of successive items in the stream was varied to obtain a threshold. When searching for one target, the threshold was 89 ms; when searching for either of two targets, it was 192 ms. This threshold difference offered a baseline. In Experiment 2, observers looked for one or two types of target in a search array. If they compared each identified item separately, we should expect a jump in the slope of the RT × Set Size function, on the order of the baseline obtained in Experiment 1. However, the slope difference was only 13 ms/item, suggesting that several identified items can be compared at once with target templates in memory. Experiment 3 showed that this slope difference was not just a memory-load cost.
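The arithmetic behind the Experiment 2 prediction can be made explicit. This is an illustrative sketch, not the authors' analysis code; it only restates the numbers reported in the abstract.

```python
# Illustrative sketch: predicted RT x Set Size slope costs under serial
# vs. parallel comparison of identified items with memory templates.

one_target_threshold_ms = 89    # Experiment 1, single target template
two_target_threshold_ms = 192   # Experiment 1, either of two templates
baseline_ms = two_target_threshold_ms - one_target_threshold_ms  # 103 ms

# If each identified item were compared with the templates serially,
# adding a second template should add roughly the baseline cost per item.
serial_prediction_ms_per_item = baseline_ms

observed_slope_difference = 13  # Experiment 2, ms/item

print(f"Serial prediction: ~{serial_prediction_ms_per_item} ms/item")
print(f"Observed: {observed_slope_difference} ms/item -> far below the "
      "serial prediction, consistent with parallel comparison")
```

The observed 13 ms/item is an order of magnitude below the ~103 ms/item a strictly serial comparison stage would predict, which is the abstract's argument for simultaneous comparison.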

Attention Perception & Psychophysics, 86(5), 1641–1652.
Citations: 0
Peripheral vision contributes to implicit attentional learning: Findings from the “mouse-eye” paradigm
IF 1.7 | CAS Zone 4, Psychology | Q3 PSYCHOLOGY | Pub Date: 2024-06-05 | DOI: 10.3758/s13414-024-02907-5
Chen Chen, Vanessa G. Lee

The central visual field is essential for activities like reading and face recognition. However, the impact of peripheral vision loss on daily activities is profound. While the importance of central vision is well established, the contribution of peripheral vision to spatial attention is less clear. In this study, we introduced a “mouse-eye” method as an alternative to traditional gaze-contingent eye tracking. We found that even in tasks requiring central vision, peripheral vision contributes to implicit attentional learning. Participants searched for a T among Ls, with the T appearing more often in one visual quadrant. Earlier studies showed that participants’ awareness of the T location probability was not essential for their ability to learn. When we limited the visible area around the mouse cursor, only participants aware of the target’s location probability showed learning; those unaware did not. Adding placeholders in the periphery did not restore implicit attentional learning. A control experiment showed that when participants were allowed to see all items while searching and moving the mouse to reveal the target’s color, both aware and unaware participants acquired location probability learning. Our results underscore the importance of peripheral vision in implicitly guided attention. Without peripheral vision, only explicit, but not implicit, attentional learning prevails.
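A minimal sketch of the mouse-contingent visibility logic implied by the "mouse-eye" paradigm, assuming a circular window around the cursor. The item coordinates, radius, and helper function are hypothetical illustrations, not taken from the paper.

```python
import math

def visible_items(items, cursor, radius):
    """Return items whose centers fall inside the mouse-contingent
    window -- a simplified stand-in for the 'mouse-eye' restriction."""
    cx, cy = cursor
    return [it for it in items
            if math.hypot(it["x"] - cx, it["y"] - cy) <= radius]

# Hypothetical search display: a T target among L distractors.
display = [
    {"id": "T",  "x": 120, "y":  80},
    {"id": "L1", "x": 300, "y": 210},
    {"id": "L2", "x": 140, "y": 100},
]

# With the cursor near the upper-left, only nearby items are revealed;
# everything outside the window (here, L1) is masked from view.
seen = visible_items(display, cursor=(130, 90), radius=50)
print([it["id"] for it in seen])  # ['T', 'L2']
```

The key manipulation in the study is exactly this masking of the periphery: with the window active, items outside the radius never reach peripheral vision, which is what abolished implicit location-probability learning.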

Attention Perception & Psychophysics, 86(5), 1621–1640.
Citations: 0
The differential impact of face distractors on visual working memory across encoding and delay stages
IF 1.7 | CAS Zone 4, Psychology | Q3 PSYCHOLOGY | Pub Date: 2024-05-31 | DOI: 10.3758/s13414-024-02895-6
Chaoxiong Ye, Qianru Xu, Zhihu Pan, Qi-Yang Nie, Qiang Liu

External distractions often occur when information must be retained in visual working memory (VWM)—a crucial element in cognitive processing and everyday activities. However, the distraction effects can differ depending on whether they occur during the encoding or the delay stage. Previous research on these effects used simple stimuli (e.g., color and orientation) rather than considering distractions caused by real-world stimuli on VWM. In the present study, participants performed a facial VWM task under different distraction conditions across the encoding and delay stages to elucidate the mechanisms of distraction resistance in the context of complex real-world stimuli. VWM performance was significantly impaired by delay-stage but not encoding-stage distractors (Experiment 1). In addition, the delay distraction effect arose primarily from the absence of distractor processing at the encoding stage rather than the presence of a distractor during the delay stage (Experiment 2). Finally, the impairment in the delay-distraction condition was not due to the abrupt appearance of distractors (Experiment 3). Taken together, these findings indicate that the processing mechanisms previously established for resisting distractions in VWM using simple stimuli can be extended to more complex real-world stimuli, such as faces.

Attention Perception & Psychophysics, 86(6), 2029–2041. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410854/pdf/
Citations: 0
Individual differences in the use of top-down versus bottom-up cues to resolve phonetic ambiguity
IF 1.7 | CAS Zone 4, Psychology | Q3 PSYCHOLOGY | Pub Date: 2024-05-29 | DOI: 10.3758/s13414-024-02889-4
Anne Marie Crinnion, Christopher C. Heffner, Emily B. Myers

How listeners weight a wide variety of information to interpret ambiguities in the speech signal is a question of interest in speech perception, particularly when understanding how listeners process speech in the context of phrases or sentences. Dominant views of cue use for language comprehension posit that listeners integrate multiple sources of information to interpret ambiguities in the speech signal. Here, we study how semantic context, sentence rate, and vowel length all influence identification of word-final stops. We find that while at the group level all sources of information appear to influence how listeners interpret ambiguities in speech, at the level of the individual listener, we observe systematic differences in cue reliance, such that some individual listeners favor certain cues (e.g., speech rate and vowel length) to the exclusion of others (e.g., semantic context). While listeners exhibit a range of cue preferences, across participants we find a negative relationship between individuals’ weighting of semantic and acoustic-phonetic (sentence rate, vowel length) cues. Additionally, we find that these weightings are stable within individuals over a period of 1 month. Taken as a whole, these findings suggest that theories of cue integration and speech processing may fail to capture the rich individual differences that exist between listeners, which could arise due to mechanistic differences between individuals in speech perception.

Attention Perception & Psychophysics, 86(5), 1724–1734.
Citations: 0
Isolating the impact of a visual search template’s color and form information on search guidance and verification times
IF 1.7 | CAS Zone 4, Psychology | Q3 PSYCHOLOGY | Pub Date: 2024-05-29 | DOI: 10.3758/s13414-024-02899-2
Derrek T. Montalvo, Andrew Rodriguez, Mark W. Becker

Visual search can be guided by biasing one’s attention towards features associated with a target. Prior work has shown that high-fidelity, picture-based cues are more beneficial to search than text-based cues. However, picture cues typically provide both detailed form information and color information that is absent from text-based cues. Given that visual resolution deteriorates with eccentricity, it is not clear that high-fidelity form information would benefit guidance to peripheral objects – much of the picture benefit could be due to color information alone. To address this, we conducted a search task with eye-tracking that had four types of cues comprising a 2 (text/pictorial cue) × 2 (no color/color) design. We hypothesized that color information would be important for efficient search guidance while high-fidelity form information would be important for efficient verification times. In Experiment 1, cues were a colored picture of the target, a gray-scaled picture of the target, a text-based cue that included color (e.g., “blue shoe”), or a text-based cue without color (e.g., “shoe”). Experiment 2 was a replication of Experiment 1, except that the color word in the text-based cue was presented in the precise color that was the dominant color in the target. Our results show that high-fidelity form information is important for efficient verification times (with color playing less of a role) and color is important for efficient guidance, though form information also benefits guidance. These results suggest that different features of the cue independently contribute to different aspects of the search process.
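The 2 (text/pictorial) × 2 (no color/color) design can be enumerated directly from the example cues given in the abstract; the enumeration itself is just a sketch of the condition structure.

```python
from itertools import product

cue_format = ["text", "picture"]
cue_color = ["no color", "color"]

# The four cue conditions of the 2 x 2 design, paired with the example
# cues the abstract gives for each cell.
examples = {
    ("text", "no color"): '"shoe"',
    ("text", "color"): '"blue shoe"',
    ("picture", "no color"): "gray-scaled picture of the target",
    ("picture", "color"): "colored picture of the target",
}

for fmt, col in product(cue_format, cue_color):
    print(f"{fmt:7s} | {col:8s} | {examples[(fmt, col)]}")
```

The design crosses form fidelity (text vs. picture) with color presence, which is what lets the study attribute guidance benefits to color and verification benefits to form independently.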

Attention Perception & Psychophysics, 86(7), 2275–2288. Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-024-02899-2.pdf
Citations: 0
Bayesian analysis on missing visual information and object complexity on visual search for object orientation and object identity
IF 1.7 | CAS Zone 4, Psychology | Q3 PSYCHOLOGY | Pub Date: 2024-05-22 | DOI: 10.3758/s13414-024-02901-x
Rachel T. T. Nguyen, Matthew S. Peterson

Missing visual information, such as a gap within an object or an occluded view, has been shown to disrupt visual search and make amodal completion inefficient. Previous research, using simple black bars as stimuli, failed to show a pop-out effect (a flat search slope across increasing visual set sizes) during a feature search when the target was partially occluded, but not in cases where it was fully visible. We wanted to see whether this lack of a pop-out effect during feature (orientation) search extended to complex objects (Experiment 1) and identity search (Experiment 2). Participants completed orientation and identity visual search tasks by deciding whether the target was present or not. Bayesian analyses were conducted to assess whether the observed data supported the null hypothesis (pop-out effects) or the alternative (differences in search slopes). When no occluders or gaps were present, a pop-out effect occurred when searching for a simple object's orientation or identity. In addition, object complexity affected identity search, with anecdotal evidence suggesting that some complex objects may not show a pop-out effect. Furthermore, white occluding bars were more disruptive than a gap in visual information for feature search but not for identity search. Overall, pop-out effects do occur for simple objects, but when the task is more difficult, search for real-world objects is greatly affected by any type of visual disruption.
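One common way to quantify evidence for a flat (pop-out) search slope is the BIC approximation to the Bayes factor. The sketch below uses that approximation with hypothetical RT data; it is not necessarily the Bayesian machinery the authors used, and all numbers are invented for illustration.

```python
import math

def bic(rss, n, k):
    # Gaussian-likelihood BIC, up to an additive constant shared by
    # both models (so it cancels in the Bayes-factor approximation).
    return n * math.log(rss / n) + k * math.log(n)

def fit_slope(x, y):
    # Ordinary least-squares line: intercept a, slope b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical mean RTs (ms) over set sizes, flat apart from noise --
# the pop-out pattern the Bayesian analysis tests for.
set_size = [4, 8, 12, 16]
rt = [512, 509, 515, 511]

a, b = fit_slope(set_size, rt)
rss_linear = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(set_size, rt))
mean_rt = sum(rt) / len(rt)
rss_flat = sum((yi - mean_rt) ** 2 for yi in rt)

# BF01 > 1 favors the flat (pop-out) model over the sloped model.
bf01 = math.exp((bic(rss_linear, len(rt), 2) - bic(rss_flat, len(rt), 1)) / 2)
print(f"slope = {b:.2f} ms/item, BF01 = {bf01:.2f}")
```

With flat data, the slope term buys almost no reduction in residual error, so the BIC penalty for the extra parameter tips the approximate Bayes factor toward the pop-out (null) model.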

Attention Perception & Psychophysics, 86(5), 1560–1573. Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-024-02901-x.pdf
Citations: 0
Enhancing rhythmic temporal expectations: The dominance of auditory modality under spatial uncertainty
IF 1.7 | CAS Zone 4, Psychology | Q3 PSYCHOLOGY | Pub Date: 2024-05-22 | DOI: 10.3758/s13414-024-02898-3
Lucie Attout, Mariagrazia Capizzi, Pom Charras

To effectively process the most relevant information, the brain anticipates the optimal timing for allocating attentional resources. Behavior can be optimized by automatically aligning attention with external rhythmic structures, whether visual or auditory. Although the auditory modality is known for its efficacy in representing temporal information, the current body of research has not conclusively determined whether visual or auditory rhythmic presentations have a definitive advantage in entraining temporal attention. The present study directly examined the effects of auditory and visual rhythmic cues on the discrimination of visual targets in Experiment 1 and on auditory targets in Experiment 2. Additionally, the role of endogenous spatial attention was also considered. When and where the target was the most likely to occur were cued by unimodal (visual or auditory) and bimodal (audiovisual) signals. A sequence of salient events was employed to elicit rhythm-based temporal expectations and a symbolic predictive cue served to orient spatial attention. The results suggest a superiority of auditory over visual rhythms, irrespective of spatial attention, whether the spatial cue and rhythm converge or not (unimodal or bimodal), and regardless of the target modality (visual or auditory). These findings are discussed in terms of a modality-specific rhythmic orienting, while considering a single, supramodal system operating in a top-down manner for endogenous spatial attention.

Attention Perception & Psychophysics, 86(5), 1681–1693.
Citations: 0
Intermixed levels of visual search difficulty produce asymmetric probability learning
IF 1.7 CAS Tier 4 Psychology Q3 PSYCHOLOGY Pub Date : 2024-05-20 DOI: 10.3758/s13414-024-02897-4
Bo-Yeong Won, Andrew B. Leber

When performing novel tasks, we often apply the rules we have learned from previous, similar tasks. Knowing when to generalize previous knowledge, however, is a complex challenge. In this study, we investigated the properties of learning generalization in a visual search task, focusing on the role of search difficulty. We used a spatial probability learning paradigm in which individuals learn to prioritize their search toward the location where a target appears more often (i.e., the high-probable location) over the others (i.e., low-probable locations) in a search display. In the first experiment, during a training phase, we intermixed easy and difficult search trials within blocks, pairing each difficulty level with its own distinct high-probable location. Then, during a testing phase, we removed the probability manipulation and assessed any generalization of spatial biases to a novel task of intermediate difficulty. Results showed that, as training progressed, the easy search evoked a stronger spatial bias to its high-probable location than the difficult search. Moreover, there was greater generalization of the easy-search learning than the difficult-search learning at test, revealed by a stronger bias toward the former's high-probable location. Two additional experiments ruled out the alternative explanations that learning during difficult search is itself weak and that easy search specifically weakens learning of the difficult search. Overall, the results demonstrate that easy search interferes with difficult-search learning and generalizability when the two levels of search difficulty are intermixed.
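The biased-location design described above can be sketched as a simple trial generator. This is a minimal sketch under illustrative assumptions: the four quadrant labels, the 50% rich-location probability, and the trial count are stand-ins, not the study's actual parameters.

```python
import random

def simulate_target_locations(n_trials, locations, rich_location, rich_prob=0.5):
    """Draw one target location per trial: the 'rich' (high-probable) location
    receives rich_prob of all targets; the rest is split evenly among the
    remaining 'lean' (low-probable) locations."""
    lean = [loc for loc in locations if loc != rich_location]
    lean_prob = (1 - rich_prob) / len(lean)
    weights = [rich_prob if loc == rich_location else lean_prob for loc in locations]
    return random.choices(locations, weights=weights, k=n_trials)

random.seed(1)
quadrants = ["top-left", "top-right", "bottom-left", "bottom-right"]
trials = simulate_target_locations(1000, quadrants, rich_location="top-left")

# Empirical proportion of targets per quadrant: the rich quadrant should
# host roughly half of all targets, each lean quadrant roughly one sixth.
counts = {q: trials.count(q) / len(trials) for q in quadrants}
```

In an actual experiment the rich location would be counterbalanced across participants; the point here is only that the rich location draws targets several times more often than any single lean location, which is what observers implicitly learn.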

{"title":"Intermixed levels of visual search difficulty produce asymmetric probability learning","authors":"Bo-Yeong Won,&nbsp;Andrew B. Leber","doi":"10.3758/s13414-024-02897-4","DOIUrl":"10.3758/s13414-024-02897-4","url":null,"abstract":"<div><p>When performing novel tasks, we often apply the rules we have learned from previous, similar tasks. Knowing when to generalize previous knowledge, however, is a complex challenge. In this study, we investigated the properties of learning generalization in a visual search task, focusing on the role of search difficulty. We used a spatial probability learning paradigm in which individuals learn to prioritize their search toward the locations where a target appears more often (i.e., high-probable location) than others (i.e., low-probable location) in a search display. In the first experiment, during a training phase, we intermixed the easy and difficult search trials within blocks, and each was respectively paired with a distinct high-probable location. Then, during a testing phase, we removed the probability manipulation and assessed any generalization of spatial biases to a novel, intermediate difficulty task. Results showed that, as training progressed, the easy search evoked a stronger spatial bias to its high-probable location than the difficult search. Moreover, there was greater generalization of the easy search learning than difficult search learning at test, revealed by a stronger bias toward the former’s high-probable location. Two additional experiments ruled out alternatives that learning during difficult search itself is weak and learning during easy search specifically weakens learning of the difficult search. 
Overall, the results demonstrate that easy search interferes with difficult search learning and generalizability when the two levels of search difficulty are intermixed.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 5","pages":"1545 - 1559"},"PeriodicalIF":1.7,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perception of temporal structure in speech is influenced by body movement and individual beat perception ability
IF 1.7 CAS Tier 4 Psychology Q3 PSYCHOLOGY Pub Date : 2024-05-20 DOI: 10.3758/s13414-024-02893-8
Tamara Rathcke, Eline Smit, Yue Zheng, Massimiliano Canzi

The subjective experience of time flow in speech deviates from the sound acoustics in substantial ways. The present study focuses on the perceptual tendency to regularize time intervals found in speech but not in other types of sounds with a similar temporal structure. We investigate to what extent individual beat perception ability is responsible for perceptual regularization and if the effect can be eliminated through the involvement of body movement during listening. Participants performed a musical beat perception task and compared spoken sentences to their drumbeat-based versions either after passive listening or after listening and moving along with the beat of the sentences. The results show that the interval regularization prevails in listeners with a low beat perception ability performing a passive listening task and is eliminated in an active listening task involving body movement. Body movement also helped to promote a veridical percept of temporal structure in speech at the group level. We suggest that body movement engages an internal timekeeping mechanism, promoting the fidelity of auditory encoding even in sounds of high temporal complexity and irregularity such as natural speech.
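One standard way to quantify the temporal irregularity that separates natural speech from its drumbeat-based version is the normalized Pairwise Variability Index (nPVI) from the speech-rhythm literature. This is offered as a common illustrative measure, not necessarily the one the authors used, and the duration values below are hypothetical.

```python
def npvi(intervals):
    """Normalized Pairwise Variability Index: the mean absolute difference
    between successive durations, normalized by their local mean and scaled
    by 100. A perfectly regular (isochronous) sequence scores 0; larger
    values indicate greater interval-to-interval irregularity."""
    if len(intervals) < 2:
        raise ValueError("need at least two intervals")
    diffs = [abs(a - b) / ((a + b) / 2)
             for a, b in zip(intervals, intervals[1:])]
    return 100 * sum(diffs) / len(diffs)

isochronous = [250, 250, 250, 250]       # drumbeat-like, perfectly regular (ms)
speechlike = [180, 320, 240, 410, 150]   # hypothetical syllable durations (ms)

regular_score = npvi(isochronous)   # 0.0
irregular_score = npvi(speechlike)  # much larger than 0
```

Perceptual regularization, on this framing, amounts to listeners experiencing the speechlike sequence as if its index were closer to the isochronous one than the acoustics warrant.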

{"title":"Perception of temporal structure in speech is influenced by body movement and individual beat perception ability","authors":"Tamara Rathcke,&nbsp;Eline Smit,&nbsp;Yue Zheng,&nbsp;Massimiliano Canzi","doi":"10.3758/s13414-024-02893-8","DOIUrl":"10.3758/s13414-024-02893-8","url":null,"abstract":"<div><p>The subjective experience of time flow in speech deviates from the sound acoustics in substantial ways. The present study focuses on the perceptual tendency to regularize time intervals found in speech but not in other types of sounds with a similar temporal structure. We investigate to what extent individual beat perception ability is responsible for perceptual regularization and if the effect can be eliminated through the involvement of body movement during listening. Participants performed a musical beat perception task and compared spoken sentences to their drumbeat-based versions either after passive listening or after listening and moving along with the beat of the sentences. The results show that the interval regularization prevails in listeners with a low beat perception ability performing a passive listening task and is eliminated in an active listening task involving body movement. Body movement also helped to promote a veridical percept of temporal structure in speech at the group level. 
We suggest that body movement engages an internal timekeeping mechanism, promoting the fidelity of auditory encoding even in sounds of high temporal complexity and irregularity such as natural speech.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 5","pages":"1746 - 1762"},"PeriodicalIF":1.7,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02893-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Individual differences in emotion-induced blindness: Are they reliable and what do they measure?
IF 1.7 CAS Tier 4 Psychology Q3 PSYCHOLOGY Pub Date : 2024-05-17 DOI: 10.3758/s13414-024-02900-y
Mark Edwards, David Denniston, Camryn Bariesheff, Nicholas J. Wyche, Stephanie C. Goodhew

The emotion-induced-blindness (EIB) paradigm has been extensively used to investigate attentional biases to emotionally salient stimuli. However, the low reliability of EIB scores (the difference in performance between the neutral and emotionally salient conditions) limits the effectiveness of the paradigm for investigating individual differences. Here, across two studies, we investigated whether we could improve the reliability of EIB scores. In Experiment 1, we introduced a mid-intensity emotionally salient stimulus condition, with the goal of obtaining a wider range of EIB magnitudes to promote reliability. In Experiment 2, we sought to reduce the attentional oddball effect, so we created a modified EIB paradigm by removing the filler images. Neither of these approaches improved the reliability of the EIB scores. Reliability for the high- and mid-intensity EIB difference scores was low, while reliability of the scores for absolute performance (neutral, high-, and mid-intensity) was high; the absolute scores were also highly correlated, even though overall performance in the emotionally salient conditions was significantly worse than in the neutral conditions. Given these results, we can conclude that while emotionally salient stimuli impair performance in the EIB task compared with the neutral condition, the strong correlation between the emotionally salient and neutral conditions means that while EIB can be used to investigate individual differences in attentional control, it is not selective to individual differences in attentional biases to emotionally salient stimuli.
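The reliability analysis described above rests on standard psychometrics. Below is a minimal sketch of split-half reliability with the Spearman-Brown correction; the per-participant difference scores are invented for illustration and are not the study's data.

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(half_a, half_b):
    """Correlate scores computed from the two halves (e.g., odd vs. even
    trials), then apply the Spearman-Brown correction to estimate the
    reliability of the full-length measure: r_full = 2r / (1 + r)."""
    r = pearson_r(half_a, half_b)
    return 2 * r / (1 + r)

# Hypothetical EIB difference scores (ms) per participant, from odd vs. even trials.
odd = [120, 85, 60, 150, 95, 40, 110, 70]
even = [110, 90, 55, 140, 100, 50, 105, 80]

estimate = split_half_reliability(odd, even)
```

Difference scores like EIB magnitudes are notorious for low reliability even when each component score is highly reliable, because subtracting two correlated measures removes shared true-score variance while keeping both measures' error variance, which is consistent with the pattern the abstract reports.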

{"title":"Individual differences in emotion-induced\u0000 blindness: Are they reliable and what do they measure?","authors":"Mark Edwards,&nbsp;David Denniston,&nbsp;Camryn Bariesheff,&nbsp;Nicholas J. Wyche,&nbsp;Stephanie C. Goodhew","doi":"10.3758/s13414-024-02900-y","DOIUrl":"10.3758/s13414-024-02900-y","url":null,"abstract":"<div><p>The emotion-induced-blindness (EIB) paradigm has been extensively used\u0000 to investigate attentional biases to emotionally salient stimuli. However, the low\u0000 reliability of EIB scores (the difference in performance between the neutral and\u0000 emotionally salient condition) limits the effectiveness of the paradigm for\u0000 investigating individual differences. Here, across two studies, we investigated\u0000 whether we could improve the reliability of EIB scores. In Experiment 1, we introduced a mid-intensity emotionally salient\u0000 stimuli condition, with the goal of obtaining a wider range of EIB magnitudes to\u0000 promote reliability. In Experiment 2, we\u0000 sought to reduce the attentional oddball effect, so we created a modified EIB\u0000 paradigm by removing the filler images. Neither of these approaches improved the\u0000 reliability of the EIB scores. Reliability for the high- and mid-intensity EIB\u0000 difference scores were low, while reliability of the scores for absolute performance\u0000 (neutral, high-, and mid-intensity) were high and the scores were also highly\u0000 correlated, even though overall performance in the emotionally salient conditions\u0000 were significantly worse than in the neutral conditions. 
Given these results, we can\u0000 conclude that while emotionally salient stimuli impair performance in the EIB task\u0000 compared with the neutral condition, the strong correlation between the emotionally\u0000 salient and neutral conditions means that while EIB can be used to investigate\u0000 individual differences in attentional control, it is not selective to individual\u0000 differences in attentional biases to emotionally salient stimuli.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 5","pages":"1 - 15"},"PeriodicalIF":1.7,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02900-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140959880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attention Perception & Psychophysics
Copyright © 2023 Book学术 All rights reserved.