Pub Date : 2025-12-08 | DOI: 10.3758/s13428-025-02898-7
Christopher T Kello, Polyphony Bruna, Kanly Thao
Neural network modeling has played a central role in psycholinguistic studies of lexical processing, but the recent advent of large language models (LLMs) offers a different approach that may yield new insights into the mental lexicon. Four LLMs were prompted across three experiments to test how they generate psycholinguistic ratings of words in comparison with humans. LLM ratings, averaged across varying list contexts, were found to be highly correlated with human ratings, and differences in correlation strengths were partly explained by differences in rating ambiguity. LLM context manipulations strengthened correlations with human ratings through better calibration, and variability in LLM ratings was correlated with human inter-rater variability. Additional results from testing LLM generation of word naming latencies showed functional deviations from factors that underlie human word naming, indicating that lexical function assembly in LLMs is currently limited by patterns of co-occurrence in textual data. Patterns at finer-grained timescales are needed in the training data to model online lexical processes. We conclude that LLMs used context to guide the assembly of generalized lexical functions, rather than recalling ratings and latencies from training data.
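The comparison between averaged LLM ratings and human norms reduces to a correlation over a shared word list. A minimal sketch of that step — the words, ratings, and list-context averages below are invented for illustration, not the paper's data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical concreteness-style ratings: each word is rated by an LLM
# in several list contexts; the per-word average is then correlated
# with a human norm.
llm_ratings = {"cat": [6.8, 6.5, 7.0], "theory": [2.1, 2.6, 2.4], "apple": [6.9, 7.0, 6.6]}
human_norms = {"cat": 6.9, "theory": 2.3, "apple": 6.8}

words = sorted(human_norms)
r = pearson_r([statistics.fmean(llm_ratings[w]) for w in words],
              [human_norms[w] for w in words])
```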
Title: Contextual assembly of lexical functions in large language models. Behavior Research Methods, 58(1), 19. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12686107/pdf/
Pub Date : 2025-12-08 | DOI: 10.3758/s13428-025-02903-z
Johan Lundin Kleberg, Astrid E Z Hallman, Rebecka Astenvald, Ann Nordgren, Terje Falck-Ytter, Ronald van den Berg
Eye tracking has become an increasingly important tool in cognitive and developmental research, providing insights into processes that are difficult to measure otherwise. The majority of eye-tracking studies rely on accurate identification of fixations and saccades in raw data using event classification algorithms (sometimes called fixation filters). Subsequently, it is common to analyze whether fixations or saccades fall into specific areas of interest (AOI). The choice of algorithms can significantly influence study outcomes, especially in special populations such as young children or individuals with neurodevelopmental conditions, where data quality is often compromised by factors such as signal loss, poor calibration, or movement artifacts. It is therefore crucial to examine how available fixation classification algorithms affect the data set at hand as part of the eye-tracking analysis. Here, we introduce the kollaR package, an open-source R library for performing the main steps of an eye-tracking analysis from event classification to AOI-based analyses and visualizations of individual or group-level data for publications. The kollaR package was specifically designed to facilitate the selection and comparison of different event classification algorithms through visualizations. In a validation analysis, we show that results from fixation classification in kollaR are consistent with those from other software implementations of the same algorithms. We demonstrate the use of kollaR with real data from typically developing individuals and individuals with neurodevelopmental conditions, and illustrate how potential threats to validity can be identified in both high- and low-quality data.
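kollaR's own API is not reproduced here; as an illustration of what an event classification algorithm does, the following is a toy dispersion-based (I-DT-style) fixation classifier — one of the standard algorithm families that such packages implement (thresholds and data are hypothetical):

```python
def classify_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Toy dispersion-based (I-DT-style) fixation classifier.

    x, y: gaze coordinates (e.g., in degrees); t: timestamps in seconds.
    A sample window counts as a fixation when its dispersion
    ((max-min in x) + (max-min in y)) stays within max_dispersion for at
    least min_duration. Returns (start, end) index pairs, end inclusive.
    """
    def dispersion(a, b):
        xs, ys = x[a:b + 1], y[a:b + 1]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # Grow the window until it spans at least min_duration.
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(i, j) <= max_dispersion:
            # Extend the fixation while dispersion stays low.
            while j + 1 < n and dispersion(i, j + 1) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations

# Hypothetical 100-Hz recording: 0.2 s fixating (0, 0), then 0.2 s at (5, 0).
t = [k * 0.01 for k in range(40)]
x = [0.0] * 20 + [5.0] * 20
y = [0.0] * 40
spans = classify_fixations(x, y, t)
```

Comparing the spans produced by different classifiers (and parameter settings) on the same recording is exactly the kind of check the package's visualizations are designed to support.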
Title: Introducing the kollaR package: A user-friendly open-access solution for eye-tracking analysis and visualization. Behavior Research Methods, 58(1), 20. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12685970/pdf/
Pub Date : 2025-12-08 | DOI: 10.3758/s13428-025-02873-2
Jamie Cummins, Ian Hussey
Implicit measures are used extensively in psychological science. One fundamental goal of these measures is to provide information diagnostic of an individual's attitudes or beliefs. After 25 years of research, this goal has not been achieved. We argue that this is because psychologists have not yet even quantified the individual-level precision of implicit measures, much less calibrated them to it. In this paper, we examine the individual-level precision of six different implicit measures across three different attitude domains (race, politics, and self-esteem) using a very large open dataset. Despite some variation, we find that there is substantial room for improvement for the precision of implicit measures as measures of individual attitudes. We recommend that researchers who wish to make theoretical inferences about individuals directly quantify individual-level precision to calibrate their tasks appropriately, both in the context of implicit measures and with tasks in psychological science more broadly.
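One way to quantify individual-level precision — sketched here with hypothetical RTs and a simplified D score, not necessarily the authors' exact estimator — is a percentile bootstrap interval around a single participant's score; a wide interval means the measure says little about that individual:

```python
import random
import statistics

def d_score(congruent, incongruent):
    """Simplified IAT-style D score: mean RT difference over pooled SD.
    (Illustrative; real scoring algorithms differ in detail.)"""
    pooled_sd = statistics.stdev(congruent + incongruent)
    return (statistics.fmean(incongruent) - statistics.fmean(congruent)) / pooled_sd

def bootstrap_ci(congruent, incongruent, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for one participant's D score;
    its width is one index of individual-level precision."""
    rng = random.Random(seed)
    boots = []
    for _ in range(n_boot):
        boots.append(d_score(rng.choices(congruent, k=len(congruent)),
                             rng.choices(incongruent, k=len(incongruent))))
    boots.sort()
    return boots[int(alpha / 2 * n_boot)], boots[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical RTs (ms) from a single participant.
congruent = [600 + 10 * k for k in range(20)]
incongruent = [700 + 10 * k for k in range(20)]
lo, hi = bootstrap_ci(congruent, incongruent)
```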
Title: The individual-level precision of implicit measures. Behavior Research Methods, 58(1), 21. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12686091/pdf/
Pub Date : 2025-12-08 | DOI: 10.3758/s13428-025-02896-9
Anahí Gutkin, Daniel W Heck
Both parametric and non-parametric extensions of the multinomial processing tree (MPT) models have been proposed for jointly modeling discrete and continuous variables. Since the two approaches have not yet been compared systematically, we assess their power and robustness in three simulation studies focusing on the weapon identification task. In this context, two statistically equivalent MPT models have been proposed, namely, the preemptive-conflict-resolution model (PCRM) and the default-interventionist model (DIM), which differ only in their assumptions regarding the order of latent processes (i.e., response times, RTs). The first simulation evaluates the calibration and statistical power of the nonstandard goodness-of-fit test for the parametric approach (i.e., the Dzhaparidze-Nikulin statistic), as well as the ability of different distributional assumptions to fit simulated RT data. The second simulation compares nested models to study the power for testing hypotheses about RTs within each model. The third simulation focuses on model-recovery performance for the two non-nested models. In all three simulations, we manipulated the size and nature of discrepancies (location/scale or shape) between latent RT distributions, sample size, and parametric assumptions. Results show that the parametric approach has higher statistical power but is also sensitive to misspecifications of distributional assumptions. In contrast, the non-parametric approach is more robust but less powerful, especially with small samples. We provide recommendations on when to use each approach and highlight the importance of properly specifying and selecting extended MPT models.
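The manipulation of location versus shape discrepancies between latent RT distributions can be illustrated with ex-Gaussian samples, a common descriptive model for RTs (the parameter values below are illustrative, not those used in the simulations, and none of the MPT machinery is shown):

```python
import random
import statistics

def ex_gaussian(rng, mu, sigma, tau, n):
    """Ex-Gaussian RT samples: Normal(mu, sigma) plus Exponential(tau)."""
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau) for _ in range(n)]

rng = random.Random(7)
baseline = ex_gaussian(rng, mu=0.5, sigma=0.05, tau=0.2, n=5000)
# Location discrepancy: same shape, shifted by 0.2 s.
shifted = ex_gaussian(rng, mu=0.7, sigma=0.05, tau=0.2, n=5000)
# Shape discrepancy: the same 0.2-s mean increase, but produced by a
# longer exponential tail, so the distribution is more skewed and
# variable -- the case where parametric assumptions matter most.
skewed = ex_gaussian(rng, mu=0.5, sigma=0.05, tau=0.4, n=5000)
```

Both `shifted` and `skewed` raise the mean RT by the same amount; only their higher moments separate them, which is why misspecified distributional assumptions can mislead the parametric approach.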
Title: Extensions of multinomial processing tree models for continuous variables: A simulation study comparing parametric and non-parametric approaches. Behavior Research Methods, 58(1), 22. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12686110/pdf/
Pub Date : 2025-12-03 | DOI: 10.3758/s13428-025-02864-3
Matt D Anderson, Emily A Cooper, Jorge Otero-Millan
In gaze-contingent rendering, the visual stimulus rendered on a display changes based on where the observer is looking. This technique allows researchers to achieve dynamic control over stimulus placement on the retina in the presence of eye movements and is often used to investigate how sensory processing and perception vary across the visual field. Precise stimulus placement using gaze-contingent rendering depends on minimizing the temporal latency between a change in the observer's gaze position, measured using an eye tracker, and the corresponding change to the stimulus. This latency, however, can be challenging to measure reliably. Here, we present a simple method for measuring system latency that requires no additional hardware beyond the eye tracker and display, which are already part of the gaze-contingent system. Two small circles are rendered on the display to simulate the appearance of two pupils. The eye tracker is pointed towards the display to record both pupils simultaneously. One pupil is drawn based on a pre-determined trajectory, for example, moving up and down at a constant speed. The second pupil is "gaze-contingent": it is drawn based on the measured position of the first pupil. The time-lag at which the position of the second pupil matches the first pupil gives the closed-loop latency of the entire system. To validate this method, we added artificial rendering delays to our system and produced measured latencies that precisely corresponded to predictions, given the refresh rate of the display. This method provides a simple, low-cost way of precisely quantifying gaze-contingent rendering latencies, with no additional hardware required.
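The core estimation step — finding the time-lag at which the gaze-contingent pupil's trace matches the pre-determined one — can be sketched as a lag search over two position traces (the traces, trajectory, and sampling details below are hypothetical):

```python
import math

def closed_loop_latency(leader, follower, max_lag=30):
    """Find the lag (in samples) at which `follower` best matches `leader`,
    by minimizing the summed squared difference over candidate lags.
    Multiply by the sample period to get latency in seconds."""
    best_lag, best_err = 0, float("inf")
    for lag in range(max_lag + 1):
        err = sum((a - b) ** 2
                  for a, b in zip(leader[:len(leader) - lag], follower[lag:]))
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag

# Hypothetical traces: a smoothly moving "first pupil" and a
# gaze-contingent "second pupil" that reproduces it 4 samples late.
leader = [5.0 * math.sin(0.1 * k) for k in range(200)]
follower = [0.0] * 4 + leader[:-4]
lag_samples = closed_loop_latency(leader, follower)
```

With real recordings, the same search is run on the eye tracker's reported positions of the two rendered "pupils", and the best-fitting lag gives the closed-loop latency of the whole system.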
Title: A method for measuring closed-loop latency in gaze-contingent rendering without extra equipment. Behavior Research Methods, 58(1), 16. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12675766/pdf/
Pub Date : 2025-12-03 | DOI: 10.3758/s13428-025-02880-3
Mohammadhossein Salari, Diederick C Niehorster, Marcus Nyström, Roman Bednarik
Changes in pupil size can lead to apparent gaze shifts in data recorded with video-based eye trackers in the absence of physical eye rotation. This is known as the pupil-size artifact (PSA). While the PSA is widely reported in desktop eye trackers, it is unknown whether and to what extent it occurs in head-mounted eye trackers. In this paper, we examined the effects of pupil size variations on eye-tracking data quality in four head-mounted eye trackers: the Pupil Core, the Pupil Neon, the SMI ETG 2w, and the Tobii Pro Glasses 2, in addition to a widely used desktop eye tracker, the SR Research EyeLink 1000 Plus. Participants viewed a central target on a monitor while we systematically varied the screen brightness to induce controlled pupil size changes. All head-mounted eye trackers exhibited PSA, with apparent gaze shifts ranging from 0.94° for the Pupil Neon to 3.46° for the Pupil Core. Except for the Pupil Neon, all eye trackers exhibited a significant change in accuracy due to pupil size variations. Precision measures showed device-specific effects of pupil size changes, with some eye trackers performing better in the bright condition and others in the dark condition. These findings demonstrated that, just like desktop eye trackers, head-mounted video-based eye trackers exhibited PSA.
Title: The effect of pupil size on data quality in head-mounted eye trackers. Behavior Research Methods, 58(1), 17. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12675653/pdf/
Pub Date : 2025-12-02 | DOI: 10.3758/s13428-025-02900-2
Piotr Litwin, Katarzyna Kubik, Matthew R Longo
In the present study, we developed a novel self-report measurement method for the rubber hand illusion (RHI) strength based on inverse multidimensional scaling (MDS). In the preregistered study consisting of two experiments, participants experienced the RHI in synchronous and asynchronous conditions (Experiment 1) as well as the RHI and the arm immobilization imaginative suggestion (Experiment 2). In each condition, participants repeatedly arranged items related to distinct bodily-related experiences (including RHI or suggestion) in accordance with the perceived similarity between them. Proximity data obtained from the arrangements were represented as distances in the multidimensional bodily space. To measure RHI strength, we focused on distances between items representing experimental conditions and two baseline items representing cases of no ownership over an external object and normal bodily feelings. We found that the distance between the rubber hand and an external object was significantly larger in the synchronous than the asynchronous condition, and larger than the distance between the immobilized arm and the normal body, demonstrating stronger shifts in ownership for synchronous RHI. In general, the RHI was associated with moderate ownership and low perceived stimulation, and it clustered with experiences related to a high degree of ownership. MDS-based solutions for the bodily space were consistent within participants and across different experimental conditions. We believe that this method can complement traditional questionnaire-based measurement, offering additional opportunities for a comprehensive self-assessment of RHI strength.
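The strength index rests on distances between condition items and the two baseline items in the recovered spatial arrangement. A toy sketch with hypothetical 2-D coordinates (the item labels and positions are invented; the actual study derives coordinates from repeated participant arrangements via inverse MDS):

```python
import math

# Hypothetical 2-D coordinates from one arrangement trial.
coords = {
    "synchronous RHI": (6.0, 4.0),
    "asynchronous RHI": (3.0, 2.0),
    "no ownership (external object)": (0.0, 0.0),
    "normal bodily feelings": (8.0, 5.0),
}

def baseline_distances(condition):
    """Distances from a condition item to the two baseline items; a larger
    distance from the no-ownership baseline indicates a stronger shift
    toward experienced ownership."""
    d_external = math.dist(coords[condition], coords["no ownership (external object)"])
    d_normal = math.dist(coords[condition], coords["normal bodily feelings"])
    return d_external, d_normal

sync_ext, sync_norm = baseline_distances("synchronous RHI")
asyn_ext, asyn_norm = baseline_distances("asynchronous RHI")
```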
Title: Novel method for rubber hand illusion strength measurement based on inverse multidimensional scaling. Behavior Research Methods, 58(1), 15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12672668/pdf/
Pub Date : 2025-12-02 | DOI: 10.3758/s13428-025-02891-0
Thomas W Nugent, Andrew J Zele
Lighting is routinely specified only by its impact on the three cone photoreceptors via the correlated color temperature (CCT), ignoring the visual and non-visual contributions of the melanopsin photoreceptors. Disentangling the behavioral effects of the CCT from those of the melanopsin excitation is complex but necessary to understand melanopsin's effects and to inform the design of new lighting spectra for the built environment. Melanopsin photoreception is important for driving many visual and non-visual functions in humans, including circadian rhythms, mood, attention, and arousal. Here, we introduce a methodology using a widely available LED source (Philips Hue Play, Signify N.V.) to decouple the effects of melanopsin from those of cone photoreceptors. We present a computational algorithm for producing two ambient illuminations with different melanopsin and rhodopsin activation levels, whilst maintaining the same cone excitations, CCT and visual appearance (i.e., the two lighting conditions are cone metamers); this simple and inexpensive method removes the major confounding factor present in approaches that alter the melanopsin excitation of a light by exchanging the wavelength, color, or CCT. The method may find applications in behavioral experiments, including for clinical trials.
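At its core, producing cone metamers with different melanopsin excitation is a linear problem: find primary drive weights w such that A·w hits target receptor excitations e, where A holds the per-primary photoreceptor excitations. A sketch with a hypothetical four-primary device — the sensitivity values are invented, and the Philips Hue Play's actual channel count and calibration are not assumed here:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical per-unit-drive excitations of L, M, S cones and melanopsin
# for a four-primary device (rows: receptors; columns: primaries).
A = [
    [0.9, 0.6, 0.2, 0.4],  # L cones
    [0.7, 0.8, 0.3, 0.5],  # M cones
    [0.1, 0.2, 0.9, 0.3],  # S cones
    [0.2, 0.4, 0.5, 0.8],  # melanopsin
]
# Two lights with identical cone excitations (cone metamers) but
# different melanopsin excitation. A physical solution must additionally
# keep all weights nonnegative and within the device gamut.
low_mel = solve(A, [1.0, 1.0, 1.0, 0.8])
high_mel = solve(A, [1.0, 1.0, 1.0, 1.1])
```

Because the cone rows of the two targets are identical, the two resulting lights match in cone excitation (hence in CCT and visual appearance) while differing only in melanopsin drive.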
Title: A method for setting the melanopsin and rhodopsin content in commercial LED sources to investigate the effects of ambient light on behavior. Behavior Research Methods, 58(1), 14.
Pub Date : 2025-12-01DOI: 10.3758/s13428-025-02872-3
Eden Elbaz, Itay Yaron, Liad Mudrik
A major challenge in studying unconscious processing is to effectively suppress the critical stimulus while allowing maximal signal strength for adequate sensitivity to detect an effect, if it exists. A possible way to do this is to calibrate stimulus strength. While calibrating stimulus strength is common in psychophysics, current calibration methods are not designed to find the maximal intensity at which the stimulus can still be rendered unconscious (i.e., to find the upper subliminal threshold for each participant). Here, we demonstrate how calibration can be used to estimate, for each observer, this targeted threshold. We present a novel calibration procedure: the Subliminal Threshold Estimation Procedure (STEP), specifically designed for estimating the upper subliminal threshold for each individual. Using simulations, we showed that STEP outperforms existing calibration methods, which yielded strikingly low accuracy. We then further validated STEP using three empirical experiments. Together, these results establish STEP as highly beneficial for the study of unconscious processing.
{"title":"The Subliminal Threshold Estimation Procedure (STEP): A calibration method tailored for estimating subliminal thresholds.","authors":"Eden Elbaz, Itay Yaron, Liad Mudrik","doi":"10.3758/s13428-025-02872-3","DOIUrl":"10.3758/s13428-025-02872-3","url":null,"abstract":"<p><p>A major challenge in studying unconscious processing is to effectively suppress the critical stimulus while allowing maximal signal strength for adequate sensitivity to detect an effect, if it exists. A possible way to do this is to calibrate stimulus strength. While calibrating stimulus strength is common in psychophysics, current calibration methods are not designed to find the maximal intensity in which the stimulus can still be rendered unconscious (i.e., find the upper subliminal threshold for each participant). Here, we demonstrate how calibration can be utilized to estimate, for each observer, this targeted threshold. We present a novel calibration procedure: the Subliminal Threshold Estimation Procedure (STEP), specifically designed for estimating the upper subliminal threshold for each individual. Using simulations, we showed that STEP outperforms existing calibration methods, which yielded strikingly low accuracy. We then further validated STEP using three empirical experiments. 
Together, these results establish STEP as highly beneficial for the study of unconscious processing.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"13"},"PeriodicalIF":3.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12669343/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145653366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
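To make the target quantity concrete — the highest intensity at which detection stays near chance — here is a generic adaptive staircase run against a simulated observer. This is NOT the STEP algorithm itself (the paper's procedure is not reproduced here); it is a standard weighted up-down rule, with an illustrative logistic observer model, that converges on an intensity where detection probability sits just above chance.

```python
import math
import random

def simulated_observer(intensity, true_threshold=0.40, slope=10.0, rng=random):
    """Hypothetical observer: detection probability rises from chance (0.5)
    toward 1.0 above a latent threshold (illustrative logistic model)."""
    p_detect = 0.5 + 0.5 / (1.0 + math.exp(-slope * (intensity - true_threshold)))
    return rng.random() < p_detect

def estimate_near_chance_intensity(n_trials=600, target_p=0.6, step=0.05, rng=None):
    """Weighted up-down staircase: equilibrium at detection probability
    p = step_up / (step_up + step_down) = target_p, i.e., just above
    chance -- a generic stand-in for an upper subliminal threshold."""
    rng = rng or random.Random(0)
    step_up = step * target_p          # applied after a miss
    step_down = step * (1.0 - target_p)  # applied after a detection
    x = 0.8  # start clearly visible
    reversals, last_direction = [], None
    for _ in range(n_trials):
        detected = simulated_observer(x, rng=rng)
        direction = -1 if detected else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(x)  # record intensity at each direction change
        last_direction = direction
        x += -step_down if detected else step_up
        x = min(max(x, 0.0), 1.0)  # clamp to the stimulus range
    return sum(reversals[-20:]) / len(reversals[-20:])
```

For the observer above, detection probability 0.6 corresponds to an intensity near 0.26, and the average of the late reversals settles in that neighborhood; an actual subliminal-threshold procedure would additionally have to verify unawareness rather than track a fixed performance level.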
Pub Date : 2025-12-01DOI: 10.3758/s13428-025-02870-5
S A Bögemann, F Krause, A van Kraaij, M A Marciniak, J M van Leeuwen, J Weermeijer, J Mituniewicz, L M C Puhlmann, M Zerban, Z C Reppmann, D Kobylińska, K S L Yuen, B Kleim, H Walter, I Myin-Germeys, R Kalisch, I M Veer, K Roelofs, E J Hermans
Stress-related disorders present a significant global burden, highlighting the need for effective preventive measures. Mobile just-in-time adaptive interventions (JITAI) can be delivered in real time and in a context-specific manner, precisely when individuals need them most. Yet, they are rarely applied in stress research. This study introduces a novel approach by performing real-time analysis of both psychological and physiological data to trigger interventions during moments of high stress. We evaluated the feasibility of this JITAI algorithm, which integrates ecological momentary assessments (EMA) and ecological physiological assessments (EPA) to generate a stress score that triggers interventions in real time by relating the score to a personalized stress threshold. The feasibility of the technical implementation, participant adherence, and user experience were assessed within a multicenter study with 215 participants conducted across five research sites. The JITAI algorithm successfully processed EMA and EPA data to trigger real-time interventions. A total of 68% (standard deviation [SD] = 29%) of EMA beeps contained extracted EPA features, demonstrating technical feasibility. The algorithm triggered 1.61 (SD = 1.26) interventions per day, with 43% (SD = 27%) of EMA beeps per week leading to triggered interventions. Compliance rates of 43% (SD = 22%) for EMA and 43% (SD = 30%) for the JITAI were achieved, with feedback indicating areas for improvement, particularly for daily-life integration. Our findings provide preliminary support for the feasibility of the developed JITAI algorithm, demonstrating effective data processing and intervention triggering in real time, while also highlighting areas for improvement. Future research should focus on minimizing participant burden, including the intensity of EMA protocols, to improve participant adherence and acceptability while maintaining the benefits of real-time intervention delivery.
{"title":"Triggering just-in-time adaptive interventions based on real-time detection of daily-life stress: Methodological development and longitudinal multicenter evaluation.","authors":"S A Bögemann, F Krause, A van Kraaij, M A Marciniak, J M van Leeuwen, J Weermeijer, J Mituniewicz, L M C Puhlmann, M Zerban, Z C Reppmann, D Kobylińska, K S L Yuen, B Kleim, H Walter, I Myin-Germeys, R Kalisch, I M Veer, K Roelofs, E J Hermans","doi":"10.3758/s13428-025-02870-5","DOIUrl":"10.3758/s13428-025-02870-5","url":null,"abstract":"<p><p>Stress-related disorders present a significant global burden, highlighting the need for effective, preventive measures. Mobile just-in-time adaptive interventions (JITAI) can be applied in real time and context-specifically, precisely when individuals need them most. Yet, they are rarely applied in stress research. This study introduces a novel approach by performing real-time analysis of both psychological and physiological data to trigger interventions during moments of high stress. We evaluated the feasibility of this JITAI algorithm, which integrates ecological momentary assessments (EMA) and ecological physiological assessments (EPA) to generate a stress score that triggers interventions in real time by relating the score to a personalized stress threshold. The feasibility of the technical implementation, participant adherence, and user experience were assessed within a multicenter study with 215 participants conducted across five research sites. The JITAI algorithm successfully processed EMA and EPA data to trigger real-time interventions. A total of 68% (standard deviation [SD] = 29%) of EMA beeps contained extracted EPA features, demonstrating technical feasibility. The algorithm triggered 1.61 (SD = 1.26) interventions per day, with 43% (SD = 27%) of EMA beeps per week leading to triggered interventions. 
Compliance rates of 43% (SD = 22%) for EMA and 43% (SD = 30%) for the JITAI were achieved, with feedback indicating areas for improvement, particularly for daily-life integration. Our findings provide preliminary support for the feasibility of the developed JITAI algorithm, demonstrating effective data processing and intervention triggering in real time, while also highlighting areas for improvement. Future research should focus on minimizing participant burden, including the intensity of EMA protocols, to improve participant adherence and acceptability while maintaining the benefits of real-time intervention delivery.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"12"},"PeriodicalIF":3.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145653283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
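The trigger logic described in the abstract above — combine EMA and EPA into a stress score, compare it to a personalized threshold — can be sketched as follows. The equal weighting and the mean-plus-one-SD threshold rule are hypothetical choices for illustration; the paper's actual score definition and threshold calibration are not specified here.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Beep:
    ema_stress: float    # momentary self-report, e.g., on a 1-7 scale
    epa_arousal: float   # physiological feature rescaled to the same range

def stress_score(beep, w_ema=0.5, w_epa=0.5):
    # Hypothetical equal weighting of the subjective and physiological streams.
    return w_ema * beep.ema_stress + w_epa * beep.epa_arousal

def personalized_threshold(history, k=1.0):
    # Hypothetical rule: person-specific mean plus k standard deviations.
    scores = [stress_score(b) for b in history]
    return mean(scores) + k * stdev(scores)

def should_trigger(beep, history):
    # Require a minimal baseline before any intervention can fire.
    if len(history) < 4:
        return False
    return stress_score(beep) > personalized_threshold(history)

baseline = [Beep(2, 2), Beep(3, 3), Beep(2, 3), Beep(3, 2)]
print(should_trigger(Beep(6, 6), baseline))  # high-stress beep -> True
print(should_trigger(Beep(2, 2), baseline))  # calm beep -> False
```

A deployed version would additionally handle beeps whose EPA features are missing (per the abstract, about a third of them) and update the personalized threshold as the participant's history grows.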