
Behavior Research Methods: Latest Publications

Geofencing in location-based behavioral research: Methodology, challenges, and implementation.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2024-10-01 Epub Date: 2023-08-25 DOI: 10.3758/s13428-023-02213-2
Yury Shevchenko, Ulf-Dietrich Reips

This manuscript presents a novel geofencing method in behavioral research. Geofencing, built upon geolocation technology, constitutes virtual fences around specific locations. Every time a participant crosses the virtual border around the geofenced area, an event can be triggered on a smartphone, e.g., the participant may be asked to complete a survey. The geofencing method can alleviate the problems of constant location tracking, such as recording sensitive geolocation information and battery drain. In scenarios where locations for geofencing are determined by participants (e.g., home, workplace), no location data need to be transferred to the researcher, so this method can ensure privacy and anonymity. Given the widespread use of smartphones and mobile Internet, geofencing has become a feasible tool in studying human behavior and cognition outside of the laboratory. The method can help advance theoretical and applied psychological science at a new frontier of context-aware research. At the same time, there is a lack of guidance on how and when geofencing can be applied in research. This manuscript aims to fill the gap and ease the adoption of the geofencing method. We describe the current challenges and implementations in geofencing and present three empirical studies in which we evaluated the geofencing method using the Samply application, a tool for mobile experience sampling research. The studies show that sensitivity and precision of geofencing were affected by the type of event, location radius, environment, operating system, and user behavior. Potential implications and recommendations for behavioral research are discussed.
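The trigger logic described above — fire an event whenever a participant's position crosses the virtual fence boundary — can be sketched with a haversine radius check. This is an illustrative sketch only, not the Samply implementation; the fix stream, coordinates, and radius below are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_events(fix_stream, center, radius_m):
    """Yield ('enter'|'exit', index) each time a location fix crosses the fence.

    `fix_stream` is an iterable of (lat, lon) fixes; `center` is the
    (lat, lon) of the geofenced location.
    """
    inside = None  # unknown until the first fix
    for i, (lat, lon) in enumerate(fix_stream):
        now_inside = haversine_m(lat, lon, *center) <= radius_m
        if inside is not None and now_inside != inside:
            yield ("enter" if now_inside else "exit", i)
        inside = now_inside
```

In a real deployment the OS geofencing API (not a polling loop) produces these transitions, which is what keeps battery cost low; the sketch only shows the state-flip logic.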

Citations: 0
Tradeoffs of estimating reaction time with absolute and relative thresholds.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2024-08-01 Epub Date: 2023-08-25 DOI: 10.3758/s13428-023-02211-4
Jarrod Blinch, Coby Trovinger, Callie R DeWinne, Guilherme de Cellio Martins, Chelsea N Ifediora, Maryam Nourollahimoghadam, John R Harry, Ty B Palmer

Measuring the duration of cognitive processing with reaction time is fundamental to several subfields of psychology. Many methods exist for estimating movement initiation when measuring reaction time, but there is an incomplete understanding of their relative performance. The purpose of the present study was to identify and compare the tradeoffs of 19 estimates of movement initiation across two experiments. We focused our investigation on estimating movement initiation on each trial with filtered kinematic and kinetic data. Nine of the estimates involved absolute thresholds (e.g., acceleration 1000 back to 200 mm/s2, micro push-button switch), and the remaining ten estimates used relative thresholds (e.g., force extrapolation, 5% of maximum velocity). The criteria were the duration of reaction time, immunity to the movement amplitude, responsiveness to visual feedback during movement execution, reliability, and the number of manually corrected trials (efficacy). The three best overall estimates, in descending order, were yank extrapolation, force extrapolation, and acceleration 1000 to 200 mm/s2. The sensitive micro push-button switch, which was the simplest estimate, had a decent overall score, but it was a late estimate of movement initiation. The relative thresholds based on kinematics had the six worst overall scores. An issue with the relative kinematic thresholds was that they were biased by the movement amplitude. In summary, we recommend measuring reaction time on each trial with one of the three best overall estimates of movement initiation. Future research should continue to refine existing estimates while also exploring new ones.
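The two families of estimates can be illustrated with a minimal onset detector over a filtered velocity trace. This is a toy sketch with hypothetical numbers; the paper's 19 estimates include more elaborate variants such as force and yank extrapolation, which are not shown here.

```python
import numpy as np

def onset_absolute(signal, threshold):
    """Movement onset = first sample exceeding a fixed (absolute) threshold."""
    above = np.flatnonzero(signal > threshold)
    return int(above[0]) if above.size else None

def onset_relative(signal, fraction=0.05):
    """Movement onset = first sample exceeding a fraction of the trial's peak,
    e.g., 5% of maximum velocity (a relative threshold)."""
    return onset_absolute(signal, fraction * np.max(signal))
```

Because the relative threshold is computed from each trial's own peak, larger-amplitude movements raise the threshold itself — one way the amplitude bias noted in the abstract can arise.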

Citations: 0
LexMAL: A quick and reliable lexical test for Malay speakers.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2024-08-01 Epub Date: 2023-09-01 DOI: 10.3758/s13428-023-02202-5
Soon Tat Lee, Walter J B van Heuven, Jessica M Price, Christine Xiang Ru Leong

Objective language proficiency measures have been found to provide better and more consistent estimates of bilinguals' language processing than self-rated proficiency (e.g., Tomoschuk et al., 2019; Wen & van Heuven, 2017a). However, objectively measuring language proficiency is often not possible because of a lack of quick and freely available language proficiency tests (Park et al., 2022). Therefore, quick valid vocabulary tests, such as LexTALE (Lemhöfer & Broersma, 2012) and its extensions (e.g., LexITA: Amenta et al., 2020; LEXTALE-FR: Brysbaert, 2013; LexPT: Zhou & Li, 2022) have been developed to reliably assess language proficiency of speakers of various languages. The present study introduces a Lexical Test for Malay Speakers (LexMAL), which estimates language proficiency for Malay first language (L1) and second language (L2) speakers. An initial 180-item LexMAL prototype was evaluated using 60 Malay L1 and 60 L2 speakers in Experiment 1. Sixty words and 30 nonwords with the highest discriminative power that span across the full difficulty range were selected for the final LexMAL based on point-biserial correlations and an item response theory analysis. The validity of LexMAL was demonstrated through a reliable discrimination between L1 and L2 speakers, significant correlations between LexMAL scores and performance on other Malay language tasks (i.e., translation accuracy and cloze test scores), and LexMAL outperforming self-rated proficiency. A validation study (Experiment 2) with the 90-item final LexMAL tested with a different group of Malay L1 (N = 61) and L2 speakers (N = 61) replicated the findings of Experiment 1. LexMAL is freely available for researchers at www.lexmal.org .
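The point-biserial criterion used for item selection is simply the Pearson correlation between a dichotomous (0/1) item score and each test-taker's total score; items that discriminate well correlate strongly with the total. A minimal sketch with toy data (the actual LexMAL analysis additionally used item response theory, not shown here):

```python
import numpy as np

def point_biserial(item_correct, total_scores):
    """Pearson r between a 0/1 item score and total test scores.

    High values mean strong test-takers tend to get the item right,
    i.e., the item discriminates well.
    """
    item = np.asarray(item_correct, dtype=float)
    total = np.asarray(total_scores, dtype=float)
    return float(np.corrcoef(item, total)[0, 1])
```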

Citations: 0
Test-retest reliability of reinforcement learning parameters.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2024-08-01 Epub Date: 2023-09-08 DOI: 10.3758/s13428-023-02203-4
Jessica V Schaaf, Laura Weidinger, Lucas Molleman, Wouter van den Bos

It has recently been suggested that parameter estimates of computational models can be used to understand individual differences at the process level. One area of research in which this approach, called computational phenotyping, has taken hold is computational psychiatry. One requirement for successful computational phenotyping is that behavior and parameters are stable over time. Surprisingly, the test-retest reliability of behavior and model parameters remains unknown for most experimental tasks and models. The present study seeks to close this gap by investigating the test-retest reliability of canonical reinforcement learning models in the context of two often-used learning paradigms: a two-armed bandit and a reversal learning task. We tested independent cohorts for the two tasks (N = 69 and N = 47) via an online testing platform with a between-test interval of five weeks. Whereas reliability was high for personality and cognitive measures (with ICCs ranging from .67 to .93), it was generally poor for the parameter estimates of the reinforcement learning models (with ICCs ranging from .02 to .52 for the bandit task and from .01 to .71 for the reversal learning task). Given that simulations indicated that our procedures could detect high test-retest reliability, this suggests that a significant proportion of the variability must be ascribed to the participants themselves. In support of that hypothesis, we show that mood (stress and happiness) can partly explain within-participant variability. Taken together, these results are critical for current practices in computational phenotyping and suggest that individual variability should be taken into account in the future development of the field.
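The ICCs reported above are intraclass correlations between the two test sessions. As a reference point, here is a minimal ICC(2,1) (two-way random effects, absolute agreement, single measure) computed from the ANOVA mean squares — an illustrative sketch, since the abstract does not state which ICC variant was used.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1) for an (n_subjects, k_sessions) array of scores."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    # ANOVA decomposition: subjects (rows), sessions (columns), residual
    ss_rows = k * np.sum((y.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((y.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((y - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly reproduced scores give an ICC of 1; session-to-session noise drives it toward 0, which is the pattern the study found for most model parameters.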

Citations: 0
It is not real until it feels real: Testing a new method for simulation of eyewitness experience with virtual reality technology and equipment.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2024-08-01 Epub Date: 2023-07-28 DOI: 10.3758/s13428-023-02186-2
Kaja Glomb, Przemysław Piotrowski, Izabela Anna Romanowska

Laboratory research in the psychology of witness testimony is often criticized for its lack of ecological validity, including the use of unrealistic artificial stimuli to test memory performance. The purpose of our study is to present a method that can provide an intermediary between laboratory research and field studies or naturalistic experiments that are difficult to control and administer. It uses Video-360° technology and virtual reality (VR) equipment, which cuts subjects off from external stimuli and gives them control over the visual field. This can potentially increase the realism of the eyewitness's experience. To test the method, we conducted an experiment comparing the immersion effect, emotional response, and memory performance between subjects who watched a video presenting a mock crime on a head-mounted display (VR goggles; n = 57) and a screen (n = 50). The results suggest that, compared to those who watched the video on a screen, the VR group had a deeper sense of immersion, that is, of being part of the scene presented. At the same time, they were not distracted or cognitively overloaded by the more complex virtual environment, and remembered just as much detail about the crime as those viewing it on the screen. Additionally, we noted significant differences between subjects in ratings of emotions felt during the video. This may suggest that the two formats evoke different types of discrete emotions. Overall, the results confirm the usefulness of the proposed method in witness research.

Citations: 0
Evaluating the Tobii Pro Glasses 2 and 3 in static and dynamic conditions.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2024-08-01 Epub Date: 2023-08-07 DOI: 10.3758/s13428-023-02173-7
V Onkhar, D Dodou, J C F de Winter

Over the past few decades, there have been significant developments in eye-tracking technology, particularly in the domain of mobile, head-mounted devices. Nevertheless, questions remain regarding the accuracy of these eye-trackers during static and dynamic tasks. In light of this, we evaluated the performance of two widely used devices: Tobii Pro Glasses 2 and Tobii Pro Glasses 3. A total of 36 participants engaged in tasks under three dynamicity conditions. In the "seated with a chinrest" trial, only the eyes could be moved; in the "seated without a chinrest" trial, both the head and the eyes were free to move; and during the walking trial, participants walked along a straight path. During the seated trials, participants' gaze was directed towards dots on a wall by means of audio instructions, whereas in the walking trial, participants maintained their gaze on a bullseye while walking towards it. Eye-tracker accuracy was determined using computer vision techniques to identify the target within the scene camera image. The findings showed that Tobii 3 outperformed Tobii 2 in terms of accuracy during the walking trials. Moreover, the results suggest that employing a chinrest in the case of head-mounted eye-trackers is counterproductive, as it necessitates larger eye eccentricities for target fixation, thereby compromising accuracy compared to not using a chinrest, which allows for head movement. Lastly, it was found that participants who reported higher workload demonstrated poorer eye-tracking accuracy. The current findings may be useful in the design of experiments that involve head-mounted eye-trackers.
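Eye-tracker accuracy is conventionally reported as the angular error between the gaze point and the target, both localized in the scene camera image. A pinhole-camera sketch of that computation follows; the focal length and pixel coordinates are hypothetical, and the paper's actual computer-vision pipeline for localizing the target is not reproduced here.

```python
import math

def angular_error_deg(gaze_px, target_px, focal_px):
    """Angular distance (degrees) between gaze and target.

    Both points are (x, y) pixel offsets from the scene-camera image center;
    `focal_px` is the camera focal length in pixels (pinhole model).
    """
    def ray(p):
        # Back-project a pixel offset to a unit 3D viewing direction.
        x, y = p
        n = math.sqrt(x * x + y * y + focal_px * focal_px)
        return (x / n, y / n, focal_px / n)

    g, t = ray(gaze_px), ray(target_px)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(g, t))))
    return math.degrees(math.acos(dot))
```

Averaging this error over fixations toward the known target yields the accuracy figure on which the two headsets were compared.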

Citations: 0
Mouth and facial informativeness norms for 2276 English words.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2024-08-01 Epub Date: 2023-08-21 DOI: 10.3758/s13428-023-02216-z
Anna Krason, Ye Zhang, Hillarie Man, Gabriella Vigliocco

Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been by manipulating their presence (e.g., by blurring the area of a speaker's lips) or by looking at how informative different mouth patterns are for the corresponding phonemes (or visemes; e.g., /b/ is visually more salient than /g/). However, moving beyond informativeness of single phonemes is challenging due to coarticulation and language variations (to name just a few factors). Here, we present mouth and facial informativeness (MaFI) for words, i.e., how visually informative words are based on their corresponding mouth and facial movements. MaFI was quantified for 2276 English words, varying in length, frequency, and age of acquisition, using phonological distance between a word and participants' speechreading guesses. The results showed that MaFI norms capture well the dynamic nature of mouth and facial movements per word, with words containing phonemes with roundness and frontness features, as well as visemes characterized by lower lip tuck, lip rounding, and lip closure being visually more informative. We also showed that the more of these features there are in a word, the more informative it is based on mouth and facial movements. Finally, we demonstrated that the MaFI norms generalize across different variants of English language. The norms are freely accessible via Open Science Framework ( https://osf.io/mna8j/ ) and can benefit any language researcher using audiovisual stimuli (e.g., to control for the effect of speech-linked mouth and facial movements).
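The phonological distance used to score speechreading guesses is, at its core, an edit distance over phoneme sequences, normalized so that identical productions score 0. A minimal sketch under that assumption (the exact scoring details of the MaFI pipeline may differ):

```python
def levenshtein(a, b):
    """Edit distance between two phoneme sequences (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (pa != pb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_distance(target, guess):
    """Distance scaled by the longer sequence: 0 = identical, 1 = fully different."""
    return levenshtein(target, guess) / max(len(target), len(guess), 1)
```

A guess of /pæt/ for the target /bæt/ differs in one of three phonemes, so the word would be scored as fairly visually informative under this metric.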

Behavior Research Methods (2024), pp. 4786-4801. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289175/pdf/
Citations: 0
ESMira: A decentralized open-source application for collecting experience sampling data.
IF 4.6 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2024-08-01 | Epub Date: 2023-08-21 | DOI: 10.3758/s13428-023-02194-2
David Lewetz, Stefan Stieger

This paper introduces ESMira, a server and mobile app (Android, iOS) developed for research projects using experience sampling method (ESM) designs. ESMira offers a very simple setup process and ease of use, while being free, decentralized, and open-source (source code is available on GitHub). The ongoing development of ESMira started in early 2019, with a focus on scientific requirements (e.g., informed consent, ethical considerations), data security (e.g., encryption), and data anonymity (e.g., completely anonymous data workflow). ESMira sets itself apart from other platforms by both being free of charge and providing study administrators with full control over study data without the need for specific technological skills (e.g., programming). This means that study administrators can have ESMira running on their own webspace without needing much technical knowledge, allowing them to remain independent from any third-party service. Furthermore, ESMira offers an extensive list of features (e.g., an anonymous built-in chat to contact participants; a reward system that allows participant incentivization without breaching anonymity; live graphical feedback for participants) and can deal with complex study designs (e.g., nested time-based sampling). In this paper, we illustrate the basic structure of ESMira, explain how to set up a new server and create studies, and introduce the platform's basic functionalities.
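ESMira's own scheduling internals are not documented in this abstract, but the kind of time-based sampling such ESM platforms implement can be sketched generically: draw a fixed number of prompt times per day within a waking-hours window, with a minimum gap between prompts. All function names, parameters, and defaults below are illustrative assumptions, not ESMira's API:

```python
import random
from datetime import datetime, timedelta


def daily_signal_times(day, n_signals=5, start_hour=9, end_hour=21,
                       min_gap_minutes=60, seed=None):
    """Draw n_signals random prompt times within one day's sampling
    window, at least min_gap_minutes apart (signal-contingent ESM)."""
    rng = random.Random(seed)
    window_start = day.replace(hour=start_hour, minute=0,
                               second=0, microsecond=0)
    window_minutes = (end_hour - start_hour) * 60
    while True:  # rejection-sample until the gap constraint holds
        offsets = sorted(rng.randrange(window_minutes)
                         for _ in range(n_signals))
        gaps = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [window_start + timedelta(minutes=m) for m in offsets]


# Example: four prompts on one day, reproducible via a fixed seed
for t in daily_signal_times(datetime(2024, 8, 1), n_signals=4, seed=42):
    print(t.strftime("%H:%M"))
```

Rejection sampling keeps the sketch short; a production scheduler would also need to handle time zones, quiet hours, and nested designs (e.g., different windows per study phase), which ESMira's study configuration reportedly supports.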

Behavior Research Methods (2024), pp. 4421-4434. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11288990/pdf/
Citations: 0
Minimal reporting guideline for research involving eye tracking (2023 edition).
IF 4.6 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2024-08-01 | Epub Date: 2023-07-28 | DOI: 10.3758/s13428-023-02187-1
Matt J Dunn, Robert G Alexander, Onyekachukwu M Amiebenomo, Gemma Arblaster, Denize Atan, Jonathan T Erichsen, Ulrich Ettinger, Mario E Giardini, Iain D Gilchrist, Ruth Hamilton, Roy S Hessels, Scott Hodgins, Ignace T C Hooge, Brooke S Jackson, Helena Lee, Stephen L Macknik, Susana Martinez-Conde, Lee Mcilreavy, Lisa M Muratori, Diederick C Niehorster, Marcus Nyström, Jorge Otero-Millan, Michael M Schlüssel, Jay E Self, Tarkeshwar Singh, Nikolaos Smyrnis, Andreas Sprenger

A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.

Behavior Research Methods (2024), pp. 4351-4357. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11225961/pdf/
Citations: 0
Great minds think alike: New measures to quantify the similarity of recalls.
IF 4.6 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2024-08-01 | Epub Date: 2023-08-01 | DOI: 10.3758/s13428-023-02174-6
Alexandra F Ortmann, Michael T Bixter, Christian C Luhmann

Given the recent interest in how memory operates in social contexts, it is more important than ever to meaningfully measure the similarity between recall sequences of different individuals. Similarity of recall sequences of different individuals has been quantified using primarily order-agnostic and some order-sensitive measures specific to memory research without agreement on any one preferred measure. However, edit distance measures have not been used to quantify the similarity of recall sequences in collaborative memory studies. In the current study, we review a broad range of similarity measures, highlighting commonalities and differences. Using simulations and behavioral data, we show that edit distances do measure a memory-relevant factor of similarity and capture information distinct from that captured by order-agnostic measures. We answer illustrative research questions which demonstrate potential applications of edit distances in collaborative and individual memory settings and reveal the unique impact collaboration has on similarity.
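The contrast the abstract draws between order-agnostic and edit-distance measures can be illustrated with a small sketch. The recall lists and the plain Levenshtein formulation are illustrative assumptions; the paper evaluates a broader range of measures:

```python
def edit_distance(a, b):
    """Levenshtein distance between two recall sequences (two-row DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]


def jaccard(a, b):
    """Order-agnostic overlap of recalled items."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0


recall_1 = ["dog", "cat", "bird", "fish"]
recall_2 = ["cat", "dog", "bird", "fish"]  # same items, different order
recall_3 = ["dog", "cat", "bird", "wolf"]  # one item differs

print(jaccard(recall_1, recall_2))         # 1.0 -- order ignored
print(edit_distance(recall_1, recall_2))   # 2   -- order difference detected
print(jaccard(recall_1, recall_3))         # 0.6
print(edit_distance(recall_1, recall_3))   # 1
```

The order-agnostic measure treats the first two recalls as identical, while the edit distance registers their reordering; this is the kind of order-sensitive information the authors argue edit distances contribute to collaborative memory research.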

Behavior Research Methods (2024), pp. 4239-4254.
Citations: 0