A comparison of the methods for detecting dyadic patterns in the actor-partner interdependence model
Pub Date: 2024-08-01; Epub Date: 2023-09-29; DOI: 10.3758/s13428-023-02233-y; Behavior Research Methods, pp. 4946-4957
Junyeong Yang, Jiwon Kim, Minjung Kim
In the actor-partner interdependence model (APIM), various dyadic patterns between an actor and a partner can be examined. One widely used approach is the parameter k method, which tests whether the ratio of the partner effect to the actor effect (p/a) differs significantly from pattern values such as -1 (contrast), 0 (actor-only or partner-only), and 1 (couple). Although using a phantom variable was a useful method for estimating the k ratio, it is no longer necessary because statistical packages now allow direct estimation of the k ratio without including a phantom variable. Moreover, the patterns can be examined by testing new variables defined in forms other than k, or by using the χ2 difference test. To date, no previous studies have evaluated and compared the various approaches for detecting dyadic patterns in the APIM. This study assesses and compares the performance of four methods for detecting dyadic patterns: (1) the phantom variable approach, (2) direct estimation of the parameter k, (3) the new-variable approach, and (4) the χ2 difference test. The first two methods frequently included multiple pattern values in their confidence intervals. Furthermore, the phantom variable approach was prone to convergence issues. The other two alternatives performed better in detecting the dyadic patterns without convergence problems. Given these findings, we suggest a novel procedure for examining dyadic patterns in the APIM.
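As a compact illustration of the parameter k described in this abstract (notation ours, for indistinguishable dyads), the APIM regressions and the pattern values can be written as:

```latex
\begin{aligned}
Y_1 &= b_0 + a\,X_1 + p\,X_2 + e_1,\\
Y_2 &= b_0 + a\,X_2 + p\,X_1 + e_2,
\end{aligned}
\qquad k = \frac{p}{a}, \qquad
k = \begin{cases}
-1 & \text{contrast pattern}\\
\phantom{-}0 & \text{actor-only (or partner-only) pattern}\\
\phantom{-}1 & \text{couple pattern}
\end{cases}
```

The methods compared in the article differ in how they estimate k, or otherwise test these pattern values.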
{"title":"A comparison of the methods for detecting dyadic patterns in the actor-partner interdependence model.","authors":"Junyeong Yang, Jiwon Kim, Minjung Kim","doi":"10.3758/s13428-023-02233-y","DOIUrl":"10.3758/s13428-023-02233-y","url":null,"abstract":"<p><p>In the actor-partner interdependence model (APIM), various dyadic patterns between an actor and partner can be examined. One widely used approach is the parameter k method, which tests whether the ratio of the partner effect to the actor effect (p/a) is significantly different from pattern values such as -1 (contrast), 0 (actor-only or partner-only), and 1 (couple). Although using a phantom variable was a useful method for estimating the k ratio, it is no longer necessary due to the availability of statistical packages that allow for a direct estimation of the k ratio without the inclusion of the phantom variable. Moreover, it is possible to examine the patterns by testing new variables defined in different forms from the k or using the χ<sup>2</sup> difference test. To date, no previous studies have evaluated and compared the various approaches for detecting the dyadic patterns in APIM. This study aims to assess and compare the performance of four different methods for detecting dyadic patterns: (1) phantom variable approach, (2) direct estimation of the parameter k, (3) new-variable approach, and (4) χ<sup>2</sup> difference test. The first two methods frequently included multiple pattern values in there confidence interval. Furthermore, the phantom variable approach was prone to convergence issues. The other two alternatives performed better in detecting the dyadic patterns without convergence problems. Given the findings of the study, we suggest a novel procedure for examining dyadic patterns in APIM.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4946-4957"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41096130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive VR for investigating threat avoidance: The VRthreat toolkit for Unity
Pub Date: 2024-08-01; Epub Date: 2023-10-04; DOI: 10.3758/s13428-023-02241-y; Behavior Research Methods, pp. 5040-5054
Jack Brookes, Samson Hall, Sascha Frühholz, Dominik R Bach
All animals have to respond to immediate threats in order to survive. In non-human animals, a diversity of sophisticated behaviours has been observed, but research in humans is hampered by ethical considerations. Here, we present a novel immersive VR toolkit for the Unity engine that allows assessing threat-related behaviour in single, semi-interactive, and semi-realistic threat encounters. The toolkit contains a suite of fully modelled naturalistic environments, interactive objects, animated threats, and scripted systems. The researcher arranges these to create an experimental manipulation, forming a series of independent "episodes" in immersive VR. Several purpose-built tools aid the design of these episodes, including a system for pre-sequencing the movement plans of animal threats. Episodes can be built with the assets included in the toolkit, but can also easily be extended with custom scripts, threats, and environments if required. During experiments, the software stores behavioural, movement, and eye-tracking data. With this software, we aim to facilitate the use of immersive VR in human threat avoidance research and thus to close a gap in the understanding of human behaviour under threat.
{"title":"Immersive VR for investigating threat avoidance: The VRthreat toolkit for Unity.","authors":"Jack Brookes, Samson Hall, Sascha Frühholz, Dominik R Bach","doi":"10.3758/s13428-023-02241-y","DOIUrl":"10.3758/s13428-023-02241-y","url":null,"abstract":"<p><p>All animals have to respond to immediate threats in order to survive. In non-human animals, a diversity of sophisticated behaviours has been observed, but research in humans is hampered by ethical considerations. Here, we present a novel immersive VR toolkit for the Unity engine that allows assessing threat-related behaviour in single, semi-interactive, and semi-realistic threat encounters. The toolkit contains a suite of fully modelled naturalistic environments, interactive objects, animated threats, and scripted systems. These are arranged together by the researcher as a means of creating an experimental manipulation, to form a series of independent \"episodes\" in immersive VR. Several specifically designed tools aid the design of these episodes, including a system to allow for pre-sequencing the movement plans of animal threats. Episodes can be built with the assets included in the toolkit, but also easily extended with custom scripts, threats, and environments if required. During the experiments, the software stores behavioural, movement, and eye tracking data. With this software, we aim to facilitate the use of immersive VR in human threat avoidance research and thus to close a gap in the understanding of human behaviour under threat.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"5040-5054"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289213/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41099134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection
Pub Date: 2024-08-01; Epub Date: 2023-09-25; DOI: 10.3758/s13428-023-02212-3; Behavior Research Methods, pp. 4716-4731
Hubert Plisiecki, Adam Sobieszek
Data on the emotionality of words are important for the selection of experimental stimuli and for sentiment analysis on large bodies of text. While norms for valence and arousal have been thoroughly collected in English, most languages do not have access to such large datasets. Moreover, theoretical developments have led to new dimensions being proposed, for which norms are only partially available. In this paper, we propose a transformer-based neural network architecture for semantic and emotional norm extrapolation that predicts a whole ensemble of norms at once while achieving state-of-the-art correlations with human judgments on each. We improve on previous approaches in correlations with human judgments by Δr = 0.1 on average. We discuss in detail the limitations of norm extrapolation as a whole, with a special focus on the introduced model. Further, we propose a practical application of our model: a method of stimulus selection that performs unsupervised control by picking words that match in their semantic content. As the proposed model can easily be applied to different languages, we provide norm extrapolations for English, Polish, Dutch, German, French, and Spanish. To aid researchers, we also provide access to the extrapolation networks through an accessible web application.
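A minimal sketch of the general idea, predicting an ensemble of norms jointly from word embeddings, is shown below. This is not the authors' architecture; the dimensions, layer sizes, and data are placeholders, and the random tensors stand in for transformer embeddings and human ratings.

```python
import torch
import torch.nn as nn

class NormExtrapolator(nn.Module):
    """Multi-output regression head over word embeddings (illustrative only)."""
    def __init__(self, embedding_dim: int, n_norms: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embedding_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_norms),  # one output per norm (e.g., valence, arousal, ...)
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        return self.head(embeddings)

# toy usage: 100 "words" with 384-dim embeddings, predicting 5 norms at once
model = NormExtrapolator(embedding_dim=384, n_norms=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

embeddings = torch.randn(100, 384)   # stand-in for transformer word embeddings
human_norms = torch.rand(100, 5)     # stand-in for rated norms scaled to [0, 1]

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(embeddings), human_norms)
    loss.backward()
    optimizer.step()
```

Predicting all norms with a shared head is what lets one network cover valence, arousal, and further semantic dimensions at once.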
{"title":"Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection.","authors":"Hubert Plisiecki, Adam Sobieszek","doi":"10.3758/s13428-023-02212-3","DOIUrl":"10.3758/s13428-023-02212-3","url":null,"abstract":"<p><p>Data on the emotionality of words is important for the selection of experimental stimuli and sentiment analysis on large bodies of text. While norms for valence and arousal have been thoroughly collected in English, most languages do not have access to such large datasets. Moreover, theoretical developments lead to new dimensions being proposed, the norms for which are only partially available. In this paper, we propose a transformer-based neural network architecture for semantic and emotional norms extrapolation that predicts a whole ensemble of norms at once while achieving state-of-the-art correlations with human judgements on each. We improve on the previous approaches with regards to the correlations with human judgments by Δr = 0.1 on average. We precisely discuss the limitations of norm extrapolation as a whole, with a special focus on the introduced model. Further, we propose a unique practical application of our model by proposing a method of stimuli selection which performs unsupervised control by picking words that match in their semantic content. As the proposed model can easily be applied to different languages, we provide norm extrapolations for English, Polish, Dutch, German, French, and Spanish. To aid researchers, we also provide access to the extrapolation networks through an accessible web application.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4716-4731"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289359/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41113756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining individual learning patterns using generalised linear mixed models
Pub Date: 2024-08-01; Epub Date: 2023-09-20; DOI: 10.3758/s13428-023-02232-z; Behavior Research Methods, pp. 4930-4945
Sean Commins, Antoine Coutrot, Michael Hornberger, Hugo J Spiers, Rafael De Andrade Moral
Everyone learns differently, but individual performance is often ignored in favour of group-level analysis. Using data from four different experiments, we show that generalised linear mixed models (GLMMs) and their extensions can be used to examine individual learning patterns. By producing ellipsoids and cluster analyses based on predicted random effects, individual learning patterns can be identified, clustered, and compared across experimental conditions or groups. This analysis can handle a range of datasets, including discrete, continuous, censored and non-censored data, as well as different experimental conditions, sample sizes, and trial numbers. Using this approach, we show that a face-name paired-associate learning task produced individuals who learn quickly, some of whom maintain high performance while others drop off, as well as individuals who perform poorly throughout the learning period. We see this more clearly in a virtual navigation spatial learning task (NavWell), where two prominent clusters of learning emerged: one comprising individuals who learned rapidly and another showing a slow, gradual learning pattern. Using data from another spatial learning task (Sea Hero Quest), we show that individuals' performance generally reflects their age category, but not always. Overall, this analytical approach may help practitioners in education and medicine to identify individuals who might need extra help and attention. In addition, identifying learning patterns may enable further investigation of the underlying neural, biological, environmental, and other factors associated with these individuals.
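As a rough illustration of the workflow (fit a mixed model with subject-level random effects, then cluster the predicted random effects), the following sketch uses a linear mixed model on simulated data as a stand-in for the GLMMs and datasets described above; it is not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# simulate 40 learners x 20 trials with subject-specific intercepts and learning slopes
rows = []
for s in range(40):
    intercept = rng.normal(0.3, 0.1)
    slope = rng.choice([0.005, 0.03])  # "slow" vs. "fast" learners
    for t in range(20):
        rows.append({"subject": s, "trial": t,
                     "score": intercept + slope * t + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

# random intercept and random slope for trial, per subject
model = smf.mixedlm("score ~ trial", df, groups=df["subject"], re_formula="~trial")
fit = model.fit()

# predicted random effects per subject -> cluster into learning-pattern groups
ranef = pd.DataFrame(fit.random_effects).T
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ranef.values)
print(ranef.assign(cluster=clusters).groupby("cluster").mean())
```

The cluster means of the random effects then summarise the "fast" and "slow" learning patterns that the clustering recovers.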
{"title":"Examining individual learning patterns using generalised linear mixed models.","authors":"Sean Commins, Antoine Coutrot, Michael Hornberger, Hugo J Spiers, Rafael De Andrade Moral","doi":"10.3758/s13428-023-02232-z","DOIUrl":"10.3758/s13428-023-02232-z","url":null,"abstract":"<p><p>Everyone learns differently, but individual performance is often ignored in favour of a group-level analysis. Using data from four different experiments, we show that generalised linear mixed models (GLMMs) and extensions can be used to examine individual learning patterns. Producing ellipsoids and cluster analyses based on predicted random effects, individual learning patterns can be identified, clustered and used for comparisons across various experimental conditions or groups. This analysis can handle a range of datasets including discrete, continuous, censored and non-censored, as well as different experimental conditions, sample sizes and trial numbers. Using this approach, we show that learning a face-named paired associative task produced individuals that can learn quickly, with the performance of some remaining high, but with a drop-off in others, whereas other individuals show poor performance throughout the learning period. We see this more clearly in a virtual navigation spatial learning task (NavWell). Two prominent clusters of learning emerged, one showing individuals who produced a rapid learning and another showing a slow and gradual learning pattern. Using data from another spatial learning task (Sea Hero Quest), we show that individuals' performance generally reflects their age category, but not always. Overall, using this analytical approach may help practitioners in education and medicine to identify those individuals who might need extra help and attention. In addition, identifying learning patterns may enable further investigation of the underlying neural, biological, environmental and other factors associated with these individuals.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4930-4945"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41146850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating open- and closed-ended questions on attitudes towards outgroups with different methods of text analysis
Pub Date: 2024-08-01; Epub Date: 2023-10-16; DOI: 10.3758/s13428-023-02218-x; Behavior Research Methods, pp. 4802-4822
Karolina Hansen, Aleksandra Świderska
Researchers in behavioral sciences often use closed-ended questions, forcing participants to express even complex impressions or attitudes through a set of predetermined answers. Although this has many advantages, people's opinions can be much richer. We argue for assessing them using different methods, including open-ended questions. Manual coding of open-ended answers requires much effort, but automated tools help to analyze them more easily. To investigate how attitudes towards outgroups can be assessed and analyzed with different methods, we carried out two representative surveys in Poland. We asked closed- and open-ended questions about what Poland should do regarding the influx of refugees. While the attitudes measured with closed-ended questions were rather negative, those that emerged from open-ended answers were not only richer but also more positive. Many themes that emerged in the manual coding were also identified in automated text analyses with the Meaning Extraction Helper (MEH). Using Linguistic Inquiry and Word Count (LIWC) and the Sentiment Analyzer from the Common Language Resources and Technology Infrastructure (CLARIN), we compared the emotional tone of the answers between the two studies. Our research confirms the high usefulness of open-ended questions in surveys and shows how methods of textual data analysis help in understanding people's attitudes towards outgroup members. Based on our comparison of methods, researchers can choose a method, or combine methods, in a way that best fits their needs.
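The sketch below illustrates, in spirit only, the kind of theme extraction that tools like MEH automate: a binary word-by-document matrix followed by dimension reduction. It is not the actual algorithm of MEH, LIWC, or the CLARIN Sentiment Analyzer, and the example answers are invented.

```python
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer

answers = [
    "we should help refugees and offer them shelter",
    "the borders should be closed to protect the country",
    "help them but control who crosses the borders",
    "offer shelter, language courses and jobs",
]

# binary word-by-document matrix, as in meaning-extraction-style analyses
vectorizer = CountVectorizer(binary=True, stop_words="english")
X = vectorizer.fit_transform(answers).toarray()

# components group words that tend to co-occur across answers ("themes")
pca = PCA(n_components=2)
pca.fit(X)
vocab = vectorizer.get_feature_names_out()
for i, component in enumerate(pca.components_):
    top_words = vocab[component.argsort()[::-1][:4]]
    print(f"theme {i + 1}: {', '.join(top_words)}")
```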
{"title":"Integrating open- and closed-ended questions on attitudes towards outgroups with different methods of text analysis.","authors":"Karolina Hansen, Aleksandra Świderska","doi":"10.3758/s13428-023-02218-x","DOIUrl":"10.3758/s13428-023-02218-x","url":null,"abstract":"<p><p>Researchers in behavioral sciences often use closed-ended questions, forcing participants to express even complex impressions or attitudes through a set of predetermined answers. Even if this has many advantages, people's opinions can be much richer. We argue for assessing them using different methods, including open-ended questions. Manual coding of open-ended answers requires much effort, but automated tools help to analyze them more easily. In order to investigate how attitudes towards outgroups can be assessed and analyzed with different methods, we carried out two representative surveys in Poland. We asked closed- and open-ended questions about what Poland should do regarding the influx of refugees. While the attitudes measured with closed-ended questions were rather negative, those that emerged from open-ended answers were not only richer, but also more positive. Many themes that emerged in the manual coding were also identified in automated text analyses with Meaning Extraction Helper (MEH). Using Linguistic Inquiry and Word Count (LIWC) and Sentiment Analyzer from the Common Language Resources and Technology Infrastructure (CLARIN), we compared the difference between the studies in the emotional tone of the answers. Our research confirms the high usefulness of open-ended questions in surveys and shows how methods of textual data analysis help in understanding people's attitudes towards outgroup members. Based on our methods comparison, researchers can choose a method or combine methods in a way that best fits their needs.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4802-4822"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289311/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41232063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling item revisiting behavior in computer-based testing: Exploring the effect of item revisitations as collateral information
Pub Date: 2024-08-01; Epub Date: 2023-08-22; DOI: 10.3758/s13428-023-02209-y; Behavior Research Methods, pp. 4661-4681
Jiwei Zhang, Chun Wang, Jing Lu
Item revisiting is one of the most frequently occurring test-taking strategies, and it can decrease test anxiety and improve test validity. Examinees either confirm their initial answers because they persist in their beliefs, or change to different answers after carefully rethinking each part of a question. Item revisiting sequences, as collateral information, reveal examinees' underlying psychological processes, such as motivation, effort, and engagement, which can support policy makers in taking further steps to improve instruction for examinees. Item revisiting behavior is commonly correlated with examinees' latent traits, and it needs to be properly analyzed in order to make valid statistical inferences. In this paper, we propose a novel item revisiting model in which a monotonicity assumption is adopted, based on the observation that examinees are more likely to revisit the current item if more revisiting behavior has occurred previously. Three simulation studies were conducted: (1) to evaluate the performance of the proposed Bayesian estimation algorithm for the new model; (2) to show that ignoring item revisiting sequences induces biased parameter estimates; and (3) to assess the model fit of the proposed model under ignorable and nonignorable item revisiting behavior assumptions. The results indicate that item revisiting behavior can be effectively utilized in conjunction with responses and response times to improve parameter estimation precision. A real data example illustrates the application of the proposed model.
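One purely illustrative way to write such a monotonicity assumption (not the authors' parameterization) is to let the revisiting probability increase with the amount of revisiting already observed:

```latex
P\!\left(R_{ij} = 1 \mid \theta_i\right)
  = \operatorname{logit}^{-1}\!\left(\beta_{0j} + \beta_{1}\,\theta_i + \gamma\, n_{i,<j}\right),
  \qquad \gamma \ge 0,
```

where R_ij indicates whether examinee i revisits item j, θ_i is the latent trait, and n_{i,<j} counts the revisits that occurred before item j; the constraint γ ≥ 0 is what makes the revisiting probability monotone in prior revisiting.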
{"title":"Modeling item revisiting behavior in computer-based testing: Exploring the effect of item revisitations as collateral information.","authors":"Jiwei Zhang, Chun Wang, Jing Lu","doi":"10.3758/s13428-023-02209-y","DOIUrl":"10.3758/s13428-023-02209-y","url":null,"abstract":"<p><p>Item revisiting behavior is one of the most frequently occurring test-taking strategies, and it can decrease test anxiety and improve test validity. Examinees either confirm the initial answers due to persistence of their beliefs or change to different answers after careful rethought on each part of the questions. Item revisiting sequences as collateral information reveal the examinees' underlying psychological processes, such as motivation, effort, and engagement, which supports policy makers in taking further steps to facilitate instructions for the examinees. Item revisiting behavior is commonly correlated with the latent traits of examinees, and it needs to be properly analyzed in order to make valid statistical inference. In this paper, we proposed a novel item revisiting model, in which a monotonicity assumption is considered based on the observation that examinees are more likely to revisit the current item if more revisiting behavior occurs previously. Three simulation studies were conducted: (1) to evaluate the performance of the proposed Bayesian estimation algorithm for the new model; (2) to show that ignoring item revisiting sequences induces biased parameter estimates; (3) to assess the model fit of the proposed model with the ignorable and nonignorable item revisiting behavior assumptions. The results indicate that item revisiting behavior can be effectively utilized in conjunction with responses and response times to improve parameter estimation precision. A real data example is provided to illustrate the application of the proposed model.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4661-4681"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10108094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A large-scale database of Chinese characters and words collected from elementary school textbooks
Pub Date: 2024-08-01; Epub Date: 2023-08-24; DOI: 10.3758/s13428-023-02214-1; Behavior Research Methods, pp. 4732-4757
Man Zhang, Zeping Liu, Mona Roxana Botezatu, Qinpu Dang, Qiming Yuan, Jinzhuo Han, Li Liu, Taomei Guo
Lexical databases are essential tools for studies of language processing and acquisition. Most previous Chinese lexical databases have focused on materials for adults, yet little is known about reading materials for children and how lexical properties of these materials affect children's reading comprehension. In the present study, we provide the first large database of 2999 Chinese characters and 2182 words collected from the official textbooks recently issued by the Ministry of Education (MOE) of the People's Republic of China for most elementary schools in Mainland China, together with norms from both school-aged children and adults. The database incorporates key orthographic, phonological, and semantic factors for these lexical units. A word-naming task was used to investigate the effects of these factors on character and word processing in both adults and children. The results suggest that (1) as grade level increases, the visual complexity of the characters and words increases whereas their semantic richness and frequency decrease; (2) the effects of lexical predictors on processing both characters and words vary between children and adults; and (3) the effect of age of acquisition shows different patterns for character- and word-naming performance. The database is available on the Open Science Framework (OSF) ( https://osf.io/ynk8c/?view_only=5186bd68549340bd923e9b6531d2c820 ) for future studies on Chinese language development.
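As a usage sketch only, one could load the published norms and relate naming latencies to lexical predictors; the file and column names below are hypothetical and should be replaced with those used in the OSF repository.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical file and column names; check the OSF repository for the real ones
norms = pd.read_csv("chinese_character_norms.csv")

# e.g., how frequency, visual complexity, and age of acquisition relate to naming RT
model = smf.ols("naming_rt ~ log_frequency + stroke_count + age_of_acquisition",
                data=norms)
print(model.fit().summary())
```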
{"title":"A large-scale database of Chinese characters and words collected from elementary school textbooks.","authors":"Man Zhang, Zeping Liu, Mona Roxana Botezatu, Qinpu Dang, Qiming Yuan, Jinzhuo Han, Li Liu, Taomei Guo","doi":"10.3758/s13428-023-02214-1","DOIUrl":"10.3758/s13428-023-02214-1","url":null,"abstract":"<p><p>Lexical databases are essential tools for studies on language processing and acquisition. Most previous Chinese lexical databases have focused on materials for adults, yet little is known about reading materials for children and how lexical properties from these materials affect children's reading comprehension. In the present study, we provided the first large database of 2999 Chinese characters and 2182 words collected from the official textbooks recently issued by the Ministry of Education (MOE) of the People's Republic of China for most elementary schools in Mainland China, as well as norms from both school-aged children and adults. The database incorporates key orthographic, phonological, and semantic factors from these lexical units. A word-naming task was used to investigate the effects of these factors in character and word processing in both adults and children. The results suggest that: (1) as the grade level increases, visual complexity of those characters and words increases whereas semantic richness and frequency decreases; (2) the effects of lexical predictors on processing both characters and words vary across children and adults; (3) the effect of age of acquisition shows different patterns on character and word-naming performance. The database is available on Open Science Framework (OSF) ( https://osf.io/ynk8c/?view_only=5186bd68549340bd923e9b6531d2c820 ) for future studies on Chinese language development.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4732-4757"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10423124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A video-game-based method to induce states of high and low flow
Pub Date: 2024-08-01; Epub Date: 2023-10-20; DOI: 10.3758/s13428-023-02251-w; Behavior Research Methods, pp. 5128-5160
Freya Joessel, Swann Pichon, Daphne Bavelier
Flow has been defined as a state of full immersion that may emerge when a person's skills match the challenge of an activity. It is a special case of being on task, as during flow, staying focused on the task feels effortless. Most experimental investigations of the neural or physiological correlates of flow contrast conditions with different levels of challenge. Yet comparing levels of challenge that are too far apart may trigger states in which the participant is off task, such as boredom or frustration. Thus, it remains unclear whether previously observed differences ascribed to flow may instead reflect differences in how much participants were on task (trying their best) across the contrasted conditions. To remedy this, we introduce a method to manipulate flow by contrasting two video game play conditions at personalized levels of difficulty, calibrated such that participants similarly tried their best in both conditions. Across three experiments (> 90 participants), higher flow was robustly reported in our high-flow than in our low-flow condition (mean effect size d = 1.31). Cardiac, respiratory, and skin conductance measures confirmed the known difference between a period of rest and the two on-task conditions of high and low flow, but failed to distinguish between the latter two. In light of the conflicting findings regarding the physiological correlates of flow, we discuss the importance of ensuring a low-flow baseline condition that keeps participants on task, and propose that the present method provides a methodological advance toward that goal.
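A generic illustration of personalizing difficulty (not the calibration procedure used in the article) is a simple staircase that converges near a player's skill level:

```python
import random

def calibrate_difficulty(play_trial, start=0.5, step=0.05, n_trials=30):
    """Simple 1-up/1-down staircase: raise difficulty after a success,
    lower it after a failure, converging near ~50% success."""
    difficulty = start
    for _ in range(n_trials):
        if play_trial(difficulty):
            difficulty += step
        else:
            difficulty -= step
        difficulty = min(max(difficulty, 0.0), 1.0)
    return difficulty

# toy player whose chance of success falls as difficulty rises
simulated_player = lambda d: random.random() < max(0.0, min(1.0, 1.2 - d))
print(calibrate_difficulty(simulated_player))
```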
{"title":"A video-game-based method to induce states of high and low flow.","authors":"Freya Joessel, Swann Pichon, Daphne Bavelier","doi":"10.3758/s13428-023-02251-w","DOIUrl":"10.3758/s13428-023-02251-w","url":null,"abstract":"<p><p>Flow has been defined as a state of full immersion that may emerge when the skills of a person match the challenge of an activity. It is a special case of being on task, as during flow, keeping focused on the task feels effortless. Most experimental investigations of the neural or physiological correlates of flow contrast conditions with different levels of challenge. Yet comparing different levels of challenge that are too distant may trigger states where the participant is off task, such as boredom or frustration. Thus, it remains unclear whether previously observed differences ascribed to flow may rather reflect differences in how much participants were on task-trying their best-across the contrasted conditions. To remedy this, we introduce a method to manipulate flow by contrasting two video game play conditions at personalized levels of difficulty calibrated such that participants similarly tried their best in both conditions. Across three experiments (> 90 participants), higher flow was robustly reported in our high-flow than in our low-flow condition (mean effect size d = 1.31). Cardiac, respiratory, and skin conductance measures confirmed the known difference between a period of rest and the two on-task conditions of high and low flow, but failed to distinguish between these latter two. In light of the conflicting findings regarding the physiological correlates of flow, we discuss the importance of ensuring a low-flow baseline condition that maintains participants on task, and propose that the present method provides a methodological advance toward that goal.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"5128-5160"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289307/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49673816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
e-Babylab: An open-source browser-based tool for unmoderated online developmental studies
Pub Date: 2024-08-01; DOI: 10.3758/s13428-023-02200-7; Behavior Research Methods, pp. 4530-4552
Chang Huan Lo, Jonas Hermes, Natalia Kartushina, Julien Mayor, Nivedita Mani
The COVID-19 pandemic massively changed the context and feasibility of developmental research. This new reality, as well as considerations about sample diversity and naturalistic settings for developmental research, highlights the need for solutions for online studies. In this article, we present e-Babylab, an open-source browser-based tool for unmoderated online studies targeted at young children and babies. e-Babylab offers an intuitive graphical user interface for the creation and management of studies, users, participant data, and stimulus material, with no programming skills required. Various kinds of audiovisual media can be presented as stimuli, and possible measures include webcam recordings, audio recordings, key presses, mouse-click/touch coordinates, and reaction times. An additional feature of e-Babylab is the possibility to administer short adaptive versions of the MacArthur-Bates Communicative Development Inventories (Chai et al., Journal of Speech, Language, and Hearing Research, 63, 3488-3500, 2020). Information pages, consent forms, and participant forms are customizable. e-Babylab has been used with a variety of measures and paradigms in over 12 studies with children aged 12 months to 8 years (n = 1516). We briefly summarize some results of these studies to demonstrate that data quality, participant engagement, and overall results are comparable between laboratory and online settings. Finally, we discuss helpful tips for using e-Babylab and present plans for upgrades.
{"title":"e-Babylab: An open-source browser-based tool for unmoderated online developmental studies.","authors":"Chang Huan Lo, Jonas Hermes, Natalia Kartushina, Julien Mayor, Nivedita Mani","doi":"10.3758/s13428-023-02200-7","DOIUrl":"10.3758/s13428-023-02200-7","url":null,"abstract":"<p><p>The COVID-19 pandemic massively changed the context and feasibility of developmental research. This new reality, as well as considerations about sample diversity and naturalistic settings for developmental research, highlights the need for solutions for online studies. In this article, we present e-Babylab, an open-source browser-based tool for unmoderated online studies targeted for young children and babies. e-Babylab offers an intuitive graphical user interface for study creation and management of studies, users, participant data, and stimulus material, with no programming skills required. Various kinds of audiovisual media can be presented as stimuli, and possible measures include webcam recordings, audio recordings, key presses, mouse-click/touch coordinates, and reaction times. An additional feature of e-Babylab is the possibility to administer short adaptive versions of MacArthur-Bates Communicative Development Inventories (Chai et al. Journal of Speech, Language, and Hearing Research, 63, 3488-3500, 2020). Information pages, consent forms, and participant forms are customizable. e-Babylab has been used with a variety of measures and paradigms in over 12 studies with children aged 12 months to 8 years (n = 1516). We briefly summarize some results of these studies to demonstrate that data quality, participant engagement, and overall results are comparable between laboratory and online settings. Finally, we discuss helpful tips for using e-Babylab and present plans for upgrades.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4530-4552"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289032/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10423123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The open toolbox for behavioral research
Pub Date: 2024-08-01; Epub Date: 2023-10-04; DOI: 10.3758/s13428-023-02199-x; Behavior Research Methods, pp. 4522-4529
Tobias Otto, Jonas Rose
In this work, we describe a new open-source MATLAB toolbox for the control of behavioral experiments. The toolbox caters to very different types of experiments, in different species, and with different underlying hardware. Typical examples are operant chambers for animals, with or without neurophysiology, behavioral experiments with human subjects, and neurophysiological recordings in humans, such as EEG and fMRI. In addition, the toolbox supports communication via Ethernet, either to control and monitor one or several experimental setups remotely or to implement distributed paradigms across different computers. This flexibility is possible because the toolbox supports a wide range of hardware, including some custom developments. An example is a fast network-based digital-IO device for communication with experimental hardware such as feeders or triggers in neurophysiological setups. We also include functions for online video analysis, allowing paradigms to be contingent on responses on a screen, the head movement of a bird in an operant chamber, or the physical location of an animal in an open arena. While the toolbox is well tested and many of its components have been in use for many years, we do not see it as a finished product but rather as a continuing development with a focus on easy extensibility and customization.
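For readers unfamiliar with network-based triggering, the sketch below shows the general idea of sending a trigger code to a networked digital-IO device over a TCP socket. The host, port, and message format are placeholders, not the toolbox's actual protocol (the toolbox itself is written in MATLAB).

```python
import socket

# placeholder address of a networked digital-IO box on the lab network
TRIGGER_HOST, TRIGGER_PORT = "192.168.0.50", 5000

def send_trigger(code: int) -> None:
    """Open a TCP connection and send a single-byte trigger code."""
    with socket.create_connection((TRIGGER_HOST, TRIGGER_PORT), timeout=1.0) as conn:
        conn.sendall(code.to_bytes(1, "big"))

send_trigger(64)  # e.g., mark stimulus onset in the neurophysiology recording
```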
{"title":"The open toolbox for behavioral research.","authors":"Tobias Otto, Jonas Rose","doi":"10.3758/s13428-023-02199-x","DOIUrl":"10.3758/s13428-023-02199-x","url":null,"abstract":"<p><p>In this work, we describe a new open-source MATLAB toolbox for the control of behavioral experiments. The toolbox caters to very different types of experiments in different species, and with different underlying hardware. Typical examples are operant chambers in animals, with or without neurophysiology, behavioral experiments in human subjects, and neurophysiological recordings in humans such as EEG and fMRI. In addition, the toolbox supports communication via Ethernet to either control and monitor one or several experimental setups remotely or to implement distributed paradigms across different computers. This flexibility is possible, since the toolbox supports a wide range of hardware, some of which is custom developments. An example is a fast network-based digital-IO device for the communication with experimental hardware such as feeders or triggers in neurophysiological setups. We also included functions for online video analysis allowing paradigms to be contingent on responses to a screen, the head movement of a bird in an operant chamber, or the physical location of an animal in an open arena. While the toolbox is well tested and many components of it have been in use for many years, we do not see it as a finished product but rather a continuing development with a focus on easy extendibility and customization.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4522-4529"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41103214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}