Pub Date: 2024-08-01 | Epub Date: 2023-09-11 | DOI: 10.3758/s13428-023-02225-y
Nikita Thomas, Jennifer H Acton, Jonathan T Erichsen, Tony Redmond, Matt J Dunn
Standard automated perimetry, a psychophysical task performed routinely in eyecare clinics, requires observers to maintain fixation for several minutes at a time in order to measure visual field sensitivity. Detection of visual field damage is confounded by eye movements, making the technique unreliable in poorly attentive individuals and those with pathologically unstable fixation, such as nystagmus. Microperimetry, which utilizes 'partial gaze-contingency' (PGC), aims to counteract eye movements but only corrects for gaze position errors prior to each stimulus onset. Here, we present a novel method of visual field examination in which stimulus position is updated during presentation, which we refer to as 'continuous gaze-contingency' (CGC). In the first part of this study, we present three case examples that demonstrate the ability of CGC to measure the edges of the physiological blind spot in infantile nystagmus with greater accuracy than PGC and standard 'no gaze-contingency' (NoGC), as initial proof-of-concept for the utility of the paradigm in measurements of absolute scotomas in these individuals. The second part of this study focused on healthy observers, in which we demonstrate that CGC has the lowest stimulus positional error (gaze-contingent precision: CGC = ± 0.29°, PGC = ± 0.54°, NoGC = ± 0.81°). CGC test-retest variability was shown to be at least as good as both PGC and NoGC. Overall, CGC is supported as a reliable method of visual field examination in healthy observers. Preliminary findings demonstrate the spatially accurate estimation of visual field thresholds related to retinal structure using CGC in individuals with infantile nystagmus.
"Reliability of gaze-contingent perimetry." Behavior Research Methods, pp. 4883-4892. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289009/pdf/
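The three correction schemes compared in this abstract differ only in when gaze position is used to re-position the stimulus: never (NoGC), once at stimulus onset (PGC), or on every sample (CGC). A minimal sketch of that distinction on simulated drifting gaze (hypothetical data and function names, not the authors' implementation):

```python
import math

def stimulus_positions(gaze, offset, mode):
    """Per-sample stimulus positions for an intended retinal offset.

    gaze: list of (x, y) gaze samples across the presentation window.
    mode: 'NoGC' - stimulus fixed relative to the fixation point;
          'PGC'  - gaze-corrected once, at stimulus onset;
          'CGC'  - re-anchored to gaze on every sample.
    """
    ox, oy = offset
    if mode == 'NoGC':
        return [(ox, oy)] * len(gaze)
    if mode == 'PGC':
        gx0, gy0 = gaze[0]
        return [(gx0 + ox, gy0 + oy)] * len(gaze)
    if mode == 'CGC':
        return [(gx + ox, gy + oy) for gx, gy in gaze]
    raise ValueError(mode)

def mean_retinal_error(gaze, stim, offset):
    """Mean distance (deg) between actual and intended retinal locations."""
    ox, oy = offset
    errs = [math.hypot(sx - gx - ox, sy - gy - oy)
            for (gx, gy), (sx, sy) in zip(gaze, stim)]
    return sum(errs) / len(errs)

# Simulated slow drift away from fixation, as in a nystagmus slow phase:
# gaze starts 0.5 deg off-target and slides a further 1 deg.
gaze = [(0.5 + 0.1 * i, 0.0) for i in range(11)]
offset = (15.0, 0.0)  # intended retinal location of the test stimulus

errors = {m: mean_retinal_error(gaze, stimulus_positions(gaze, offset, m), offset)
          for m in ('NoGC', 'PGC', 'CGC')}
```

On this toy drift, CGC eliminates the retinal positional error entirely, PGC removes only the offset present at onset, and NoGC accumulates both, mirroring the ordering of the precision figures reported above.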
Pub Date: 2024-08-01 | Epub Date: 2023-09-11 | DOI: 10.3758/s13428-023-02195-1
Susannah B F Paletz, Ewa M Golonka, Nick B Pandža, Grace Stanton, David Ryan, Nikki Adams, C Anton Rytting, Egle E Murauskaite, Cody Buntain, Michael A Johns, Petra Bradley
The proper measurement of emotion is vital to understanding the relationship between emotional expression in social media and other factors, such as online information sharing. This work develops a standardized annotation scheme for quantifying emotions in social media using recent emotion theory and research. Human annotators assessed both social media posts and their own reactions to the posts' content on scales of 0 to 100 for each of 20 (Study 1) and 23 (Study 2) emotions. For Study 1, we analyzed English-language posts from Twitter (N = 244) and YouTube (N = 50). Associations between emotion ratings and text-based measures (LIWC, VADER, EmoLex, NRC-EIL, Emotionality) demonstrated convergent and discriminant validity. In Study 2, we tested an expanded version of the scheme in-country, in-language, on Polish (N = 3648) and Lithuanian (N = 1934) multimedia Facebook posts. While the correlations were lower than with English, patterns of convergent and discriminant validity with EmoLex and NRC-EIL still held. Coder reliability was strong across samples, with intraclass correlations of .80 or higher for 10 different emotions in Study 1 and 16 different emotions in Study 2. This research improves the measurement of emotions in social media to include more dimensions, multimedia, and context compared to prior schemes.
"Social media emotions annotation guide (SMEmo): Development and initial validity." Behavior Research Methods, pp. 4435-4485.
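The coder-reliability figures reported above are intraclass correlations. The abstract does not state which ICC variant was used, so the sketch below implements the simplest one-way random-effects form, ICC(1), on hypothetical 0-100 ratings:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1) for an n_targets x k_raters table."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-target and within-target mean squares from a one-way ANOVA.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Two hypothetical coders rating four posts on a 0-100 emotion scale.
consistent = [[80, 82], [10, 12], [50, 49], [95, 96]]
inconsistent = [[0, 100], [100, 0], [50, 50], [0, 100]]
```

High between-post variance with near-identical coder values drives the statistic toward 1; disagreement that swamps the between-post variance drives it to zero or below.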
Pub Date: 2024-08-01 | Epub Date: 2023-08-01 | DOI: 10.3758/s13428-023-02185-3
Jordi Manuello, Donato Liloia, Annachiara Crocetta, Franco Cauda, Tommaso Costa
Coordinate-based meta-analysis (CBMA) is a powerful technique in human brain imaging research. Given its widespread use, several procedures for data preparation and post hoc analyses have been proposed. However, these steps are often performed manually by the researcher, making them time-consuming and prone to error. We therefore developed the Coordinate-Based Meta-Analyses Toolbox (CBMAT), a suite of user-friendly, automated MATLAB® functions that perform all of these procedures in a fast, reproducible, and reliable way. Besides describing the code, we provide an annotated example of running CBMAT on a dataset of 34 experiments. CBMAT can therefore substantially improve how data are handled when performing CBMAs. The code can be downloaded from https://github.com/Jordi-Manuello/CBMAT.git .
"CBMAT: a MATLAB toolbox for data preparation and post hoc analyses in neuroimaging meta-analyses." Behavior Research Methods, pp. 4325-4335. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11519206/pdf/
Pub Date: 2024-08-01 | Epub Date: 2023-10-11 | DOI: 10.3758/s13428-023-02249-4
Celina I von Eiff, Julian Kauk, Stefan R Schweinberger
We describe JAVMEPS, an audiovisual (AV) database for emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistically induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords), and (C2) with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant (CI) users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing individuals showed good classification performance (McorrAV = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above the chance level of .14 for auditory-only stimuli, with best rates for surprise (.31) and anger (.30). We anticipate that JAVMEPS will become a useful open resource for research into auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also help fill a gap in research on dynamic audiovisual integration in emotion perception via behavioral or neurophysiological recordings.
"The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities." Behavior Research Methods, pp. 5103-5115. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289065/pdf/
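The caricature levels in Set B (60%, 100%, 140%) correspond to positions along a neutral-to-emotion morph axis. The actual stimuli were produced with voice-morphing software; the sketch below only illustrates the arithmetic of the levels on hypothetical acoustic feature vectors:

```python
def morph(neutral, emotional, level):
    """Linearly inter-/extrapolate a feature vector along the
    neutral-to-emotion axis: level 1.0 reproduces the original emotional
    values, 0.6 yields an anti-caricature, 1.4 an exaggerated caricature."""
    return [n + level * (e - n) for n, e in zip(neutral, emotional)]

# Hypothetical features (e.g., mean F0 in Hz, intensity in dB).
neutral = [100.0, 60.0]
angry = [200.0, 80.0]

anti = morph(neutral, angry, 0.6)        # attenuated toward neutral
caricature = morph(neutral, angry, 1.4)  # exaggerated past the original
```

Level 1.0 recovers the original emotional values exactly; levels below 1 pull every feature toward neutral, and levels above 1 push past the original.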
Pub Date: 2024-08-01 | Epub Date: 2023-10-05 | DOI: 10.3758/s13428-023-02224-z
Shu Fai Cheung, Sing-Hang Cheung
Mediation, moderation, and moderated mediation are common in behavioral research models. Several tools are available for estimating indirect effects, conditional effects, and conditional indirect effects and forming their confidence intervals. However, there are no simple-to-use tools that can appropriately form bootstrap confidence intervals for standardized conditional indirect effects. Moreover, some tools support only a limited range of models. We developed an R package, manymome, which can be used to estimate and form confidence intervals for indirect effects, conditional effects, and conditional indirect effects, standardized or not, using a two-step approach: model parameters are estimated either by structural equation modeling using lavaan or by a set of linear regression models using lm, and the coefficients are then used to compute the requested effects and form confidence intervals. It can be used when there are missing data if the model is fitted by structural equation modeling. There are only a few limitations on some aspects of a model, and no inherent limitations on the number of predictors, independent variables, moderators, or mediators. The goal is to have a tool that allows researchers to focus on model fitting first and worry about estimating the effects later. The use of the package is illustrated with a few numerical examples, and the limitations of the package are discussed.
"manymome: An R package for computing the indirect effects, conditional effects, and conditional indirect effects, standardized or unstandardized, and their bootstrap confidence intervals, in many (though not all) models." Behavior Research Methods, pp. 4862-4882. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289038/pdf/
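manymome itself is an R package built on lavaan and lm, but the two-step logic the abstract describes (fit the component models, then combine coefficients and bootstrap the combination) can be sketched in a few lines. Below is a hypothetical simple-mediation example (a from m ~ x, b from y ~ m + x, indirect effect a*b) with a case-resampling percentile bootstrap, not the package's own implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated mediation data: x -> m -> y with paths a = 0.5, b = 0.7,
# plus a direct effect c' = 0.2 (true indirect effect a*b = 0.35).
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.5, size=n)
y = 0.7 * m + 0.2 * x + rng.normal(scale=0.5, size=n)

def indirect(x, m, y):
    """Two-step estimate: a from m ~ x, b from y ~ m + x; indirect = a*b."""
    a = np.linalg.lstsq(np.c_[np.ones_like(x), x], m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.c_[np.ones_like(x), m, x], y, rcond=None)[0][1]
    return a * b

point = indirect(x, m, y)

# Percentile bootstrap: resample cases, recompute the product of paths.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Bootstrapping the product directly, rather than assuming normality of a*b, is what makes the interval appropriate for indirect effects; the same resampling loop extends to standardized and conditional effects by recomputing those quantities per resample.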
Pub Date: 2024-08-01 | Epub Date: 2023-09-19 | DOI: 10.3758/s13428-023-02210-5
Tim Meyer, Arnold D Kim, Michael Spivey, Jeff Yoshimi
Mouse tracking is an important source of data in cognitive science. Most contemporary mouse tracking studies use binary-choice tasks and analyze the curvature or velocity of an individual mouse movement during an experimental trial as participants select one of the two options. However, many types of mouse tracking data are available beyond what is produced in a binary-choice task, including naturalistic data from web users. To utilize these data, cognitive scientists need tools that are robust to the lack of trial-by-trial structure in most everyday computer tasks. We use singular value decomposition (SVD) and detrended fluctuation analysis (DFA) to analyze whole time series of unstructured mouse movement data. We also introduce a new technique for describing two-dimensional mouse traces as complex-valued time series, which allows SVD and DFA to be applied in a straightforward way without losing important spatial information. We find that there is useful information at the level of whole time series, and we use this information to predict performance in an online task. We also discuss how these results can advance the use of mouse tracking in cognitive science.
"Mouse tracking performance: A new approach to analyzing continuous mouse tracking data." Behavior Research Methods, pp. 4682-4694. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289036/pdf/
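A sketch of the two ideas named above: a two-dimensional trace stored as one complex-valued series, and a minimal DFA whose scaling exponent separates uncorrelated from persistent movement. The paper applies DFA to the complex series itself; here, as a simplification, it is applied per coordinate on simulated data:

```python
import numpy as np

def dfa(series, windows):
    """Minimal first-order DFA: integrate the series, detrend it linearly
    within windows of each size, and return the scaling exponent alpha
    from a log-log fit of fluctuation size against window size."""
    profile = np.cumsum(series - np.mean(series))
    flucts = []
    for w in windows:
        t = np.arange(w)
        f = []
        for i in range(len(profile) // w):
            seg = profile[i * w:(i + 1) * w]
            coef = np.polyfit(t, seg, 1)                 # linear detrend
            f.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f)))
    return np.polyfit(np.log(windows), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
# A mouse trace as a complex-valued series: x + iy keeps both dimensions.
xy = rng.normal(size=(2, 4096))
z = xy[0] + 1j * xy[1]

windows = [16, 32, 64, 128, 256]
alpha_white = dfa(z.real, windows)            # uncorrelated: alpha ~ 0.5
alpha_brown = dfa(np.cumsum(z.real), windows)  # integrated: alpha ~ 1.5
```

The exponent discriminates movement regimes without any trial structure, which is what makes the approach usable on naturalistic, continuous recordings.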
Pub Date: 2024-08-01 | Epub Date: 2023-08-07 | DOI: 10.3758/s13428-023-02192-4
Jongsoo Baek, Hae-Jeong Park
The speed-accuracy tradeoff (SAT) often makes psychophysical data difficult to interpret. Accordingly, the SAT experimental procedure and model were proposed to provide an integrated account of the speed and accuracy of responses. However, the extensive data collection required for a SAT experiment has limited its popularity. For quick estimation of the SAT function (SATf), we previously developed a Bayesian adaptive SAT method that includes an online stimulus selection strategy. In simulations, the method proved efficient, achieving high accuracy and precision with minimal trials, but it was suited only to single-condition tasks. It therefore needed to be extended to more general designs with multiple conditions, revised for improved estimation performance, and validated in real experiments with human participants. In the current study, we propose an improved method that measures SATfs for multiple task conditions concurrently and is more robust across general designs. Performance was evaluated with simulation studies and a psychophysical experiment using a flanker task. Simulation results revealed that the proposed method, with its adaptive stimulus selection strategy, efficiently estimated multiple SATfs and improved performance even in cases with an extreme parameter value. In the psychophysical experiment, SATfs estimated from minimal adaptive trials (1/8 of conventional trials) agreed closely with those from the full set of conventional trials required to reliably estimate multiple SATfs.
"Bayesian adaptive method for estimating speed-accuracy tradeoff functions of multiple task conditions." Behavior Research Methods, pp. 4403-4420. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289146/pdf/
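The abstract does not specify the parametric form of the SAT function; a common choice in the SAT literature is a shifted exponential rising from chance toward an asymptote, sketched here on hypothetical parameter values as an illustration of what is being estimated per condition:

```python
import math

def sat_curve(t, lam, beta, delta):
    """Shifted-exponential SAT function: accuracy (in d' units) stays at
    chance (0) until the intercept delta, then rises at rate beta toward
    the asymptote lam as processing time t grows."""
    if t <= delta:
        return 0.0
    return lam * (1.0 - math.exp(-beta * (t - delta)))

# Two hypothetical task conditions differing only in asymptotic accuracy,
# sampled at processing times 0.0, 0.1, ..., 2.0 s.
easy = [sat_curve(t / 10, 3.0, 5.0, 0.3) for t in range(21)]
hard = [sat_curve(t / 10, 2.0, 5.0, 0.3) for t in range(21)]
```

Estimating several such curves concurrently means sharing trials across conditions while recovering each condition's asymptote, rate, and intercept, which is where an adaptive stimulus selection strategy saves testing time.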
Pub Date: 2024-08-01 | Epub Date: 2023-08-14 | DOI: 10.3758/s13428-023-02175-5
Dmitry V Zlenko, Vladimir M Olshanskiy, Andrey A Orlov, Alexander O Kasumyan, Eoin MacMahon, Xue Wei, Peter Moller
In some fish lineages, evolution has led to unique sensory adaptations that provide information which is not available to terrestrial animals. These sensory systems include, among others, electroreception, which together with the ability of fish to generate electric discharges plays a role in social communication and object location. Most studies on electric phenomena in aquatic animals are dedicated to selected groups of electric fishes that regularly generate electric signals (Mormyriformes, Gymnotiformes). There exist, however, several species (hitherto described as non-electric) which, though able to perceive electric signals, have now been found to also generate them. In this article, we introduce a tool that we have designed to investigate such electric activity. This required significant adaptations of the equipment used in fish with regular discharge generation. The necessary improvements were realized by using a multielectrode registration setup allowing simultaneous visualization and quantification of behavior and associated electric activity of fish, alone or in groups, with combined electro-video clips. Precise synchronization of locomotor and electric behaviors made it possible to determine the electrically active fish in a group, and also the location of the electrogenic structure inside the fish's body. Our simple registration procedure, together with data presentation, should attract a broad audience of scientists taking up the challenge of uncovering electric phenomena in aquatic animals currently treated as electrically inactive.
"Visualization of electric fields and associated behavior in fish and other aquatic animals." Behavior Research Methods, pp. 4255-4276.
Pub Date: 2024-08-01 | Epub Date: 2023-08-07 | DOI: 10.3758/s13428-023-02197-z
Benjamin O Rangel, Giacomo Novembre, Jan R Wessel
Inhibition is a key cognitive control mechanism humans use to enable goal-directed behavior. When rapidly exerted, inhibitory control has broad, nonselective motor effects, typically demonstrated using corticospinal excitability (CSE) measurements elicited by transcranial magnetic stimulation (TMS). For example, during rapid action-stopping, CSE is suppressed at both stopped and task-unrelated muscles. While such TMS-based CSE measurements have provided crucial insights into the fronto-basal ganglia circuitry underlying inhibitory control, they have several downsides. TMS is contraindicated in many populations (e.g., epilepsy or deep-brain stimulation patients), has limited temporal resolution, produces distracting auditory and haptic stimulation, is difficult to combine with other imaging methods, and requires expensive, immobile equipment. Here, we attempted to measure the nonselective motor effects of inhibitory control using a method unaffected by these shortcomings. Thirty male and female human participants exerted isometric force on a high-precision handheld force transducer while performing a foot-response stop-signal task. Indeed, when foot movements were successfully stopped, force output at the task-irrelevant hand was suppressed as well. Moreover, this nonselective reduction of isometric force was highly correlated with stop-signal performance and showed frequency dynamics similar to established inhibitory signatures typically found in neural and muscle recordings.
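The "stop-signal performance" the abstract correlates force suppression with is conventionally summarized as the stop-signal reaction time (SSRT). As a hedged illustration (the paper's own analysis pipeline is not reproduced here), the standard integration method estimates SSRT from the go-trial RT distribution and the observed probability of failing to stop:

```python
import numpy as np

def ssrt_integration(go_rts, stop_ssds, stop_responded):
    """Estimate stop-signal reaction time (SSRT) with the integration method.

    go_rts: reaction times (s) on go trials
    stop_ssds: stop-signal delays (s) on stop trials
    stop_responded: bool array, True when stopping failed
    """
    go_rts = np.sort(np.asarray(go_rts))
    p_respond = np.mean(stop_responded)
    # go RT at the quantile matching the probability of failed stopping
    nth_rt = go_rts[int(np.ceil(p_respond * len(go_rts))) - 1]
    # SSRT = that quantile RT minus the mean stop-signal delay
    return nth_rt - np.mean(stop_ssds)
```

The logic follows the independent race model: the finishing time of the inhibition process is inferred from where the stop signal "cuts off" the go RT distribution.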
"Measuring the nonselective effects of motor inhibition using isometric force recordings." Benjamin O Rangel, Giacomo Novembre, Jan R Wessel. Behavior Research Methods, pp. 4486-4503. DOI: 10.3758/s13428-023-02197-z
Pub Date: 2024-08-01 | Epub Date: 2023-09-11 | DOI: 10.3758/s13428-023-02223-0
Yi Mou, Huilan Xiao, Bo Zhang, Yingying Jiang, Xuqing Wang
Nonverbal numerical ability supports individuals' numerical information processing in everyday life and is also correlated with their learning of mathematics. This ability is typically measured with an approximate number comparison paradigm, in which participants are presented with two sets of objects and instructed to choose the numerically larger set. This paradigm has multiple task variants, in which the two sets are presented in different ways (e.g., simultaneously or sequentially, intermixed or separately). Although different task variants have often been used interchangeably, it remains unclear whether they measure the same aspects of nonverbal numerical ability. Using a latent variable modeling approach with 270 participants (M_age = 20.75 years, SD_age = 2.03, 94 males), this study examined the degree to which three commonly used task variants tapped into the same construct. The results showed that a bi-factor model, corresponding to the hypothesis that task variants had both commonalities and uniqueness, fit the data better than a single-factor model, corresponding to the hypothesis that task variants were construct equivalent. These findings suggest that task variants of approximate number comparison do not measure the same construct and cannot be used interchangeably. This study also quantified the extent to which general cognitive abilities were involved in both common and unique parts of these task variants.
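Performance in the comparison paradigm described above is commonly modeled with a Weber-fraction account of the approximate number system, in which internal numerosity estimates are Gaussian with spread proportional to set size. This is not the analysis used in the study (which fit latent variable models); it is only a sketch of the standard psychophysical model behind such tasks, with the parameterization an assumption on my part:

```python
import math

def p_larger_correct(n1, n2, w):
    """Probability of correctly picking the larger of two sets under a
    Weber-fraction model: each numerosity n is represented as a Gaussian
    with mean n and standard deviation w * n, so the difference of the
    two estimates has SD w * sqrt(n1^2 + n2^2).
    """
    sigma = w * math.sqrt(n1 ** 2 + n2 ** 2)
    z = abs(n1 - n2) / sigma
    # standard normal CDF Phi(z), computed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A smaller Weber fraction w yields sharper discrimination, so fitting w to a participant's accuracy across numerosity ratios gives a single-number summary of their approximate number acuity.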
"Are they equivalent? An examination of task variants of approximate number comparison." Yi Mou, Huilan Xiao, Bo Zhang, Yingying Jiang, Xuqing Wang. Behavior Research Methods, pp. 4850-4861. DOI: 10.3758/s13428-023-02223-0