Title: Testing the Consistency of the Moral Growth Mindset Measure Across People With Different Political Perspectives
Authors: Hyemin Han, Kelsie J. Dawson, Youn-Jeng Choi
Journal: European Journal of Psychological Assessment
Pub Date: 2023-05-01 | DOI: 10.1027/1015-5759/a000710
Abstract: Although the Moral Growth Mindset (MGM) Measure has been tested and validated in general, whether it measures MGM consistently across people with different political perspectives, which are associated with moral foundations, has not been examined. We tested measurement invariance (MI) and differential item functioning (DIF) across political affiliations to determine whether the MGM Measure functions consistently. We also examined the relationships among MGM, moral foundations, and political affiliation with t-tests and regression analyses. First, at the test level, the strictest form of MI was achieved, indicating that the measurement structure was consistent across political groups. Second, no item showed significant DIF, so the MGM Measure was not biased at the item level. Third, t-tests and regression analyses indicated that MGM and its relationship with moral foundations were not significantly associated with political affiliation.
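The item-level DIF screening described above can be illustrated with the classical Mantel-Haenszel procedure, a common DIF screen (though not necessarily the authors' exact method). The stratified counts below are invented for illustration:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across ability strata.
    Each stratum is a 2x2 table of item responses:
    (ref_correct, ref_wrong, focal_correct, focal_wrong).
    A value near 1.0 suggests the item functions similarly for both groups."""
    num = sum(rc * fw / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    den = sum(rw * fc / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    return num / den

# Invented counts for one item, split into two ability strata
strata = [(30, 10, 28, 12), (20, 20, 18, 22)]
print(round(mantel_haenszel_or(strata), 2))  # -> 1.25
```

In practice the odds ratio is usually transformed to the ETS delta scale and tested for significance; this sketch only shows the core computation.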
Title: Teaching Quality in Higher Education
Authors: Martin Daumiller, Stefan Janke, Julia Hein, Raven Rinas, Oliver Dickhäuser, Markus Dresel
Journal: European Journal of Psychological Assessment
Pub Date: 2023-05-01 | DOI: 10.1027/1015-5759/a000700
Abstract: Teaching quality is a crucial factor in higher education. Research on this topic often requires assessing teaching quality as a global construct through self-reports. However, such instruments are criticized for the lack of alignment between teacher and student reports of instructional practices. We argue that while teachers may over- or underestimate specific dimensions of teaching quality, the aggregation of these dimensions into overarching teaching quality reflects differences between teachers well. Accordingly, we test a ten-item measure that allows faculty to self-report their teaching quality based on the aspects distinguished in the SEEQ (Marsh, 1982, 2007). Using 15,503 student assessments of teaching quality in 889 sessions taught by 97 faculty members, we conducted doubly latent multilevel modeling, taking bias and unfairness variables into account, to model overarching teaching quality as assessed by students while simultaneously correcting for measurement error and potential distortions arising from the assessment situation. This global factor of teaching quality was strongly associated with teacher self-reported teaching quality (ρ = .74), which we interpret as evidence that global teacher reports of teaching quality can serve as sensible indicators of overarching teaching quality for nomothetic research in higher education.
Title: Face Validity
Authors: Mark S. Allen, Davina A. Robson, D. Iliescu
Journal: European Journal of Psychological Assessment
Pub Date: 2023-05-01 | DOI: 10.1027/1015-5759/a000777
Title: Going Beyond Observable Actions
Authors: Burcu Arslan, Caitlin Tenison, B. Finn
Journal: European Journal of Psychological Assessment
Pub Date: 2023-04-12 | DOI: 10.1027/1015-5759/a000756
Abstract: Pauses represented in process data captured from digital learning and assessment tasks are defined as the time elapsed between two consecutive events. In educational assessment, pauses are used as markers of unobservable cognitive processes, such as encoding, problem-solving, and planning, that underlie test takers' subsequent observable actions. To make valid inferences about the underlying cognitive processes represented by pauses, we argue that a task-specific cognitive modeling approach is required. We discuss and demonstrate how to apply a task-specific, theory-based cognitive modeling approach to interpreting pauses. We believe this approach will be valuable for educational researchers seeking to make valid, task-general inferences about test-taker cognition from pauses represented in process data.
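The working definition above, a pause as the time elapsed between two consecutive logged events, can be sketched directly; the event labels and timestamps below are hypothetical:

```python
def extract_pauses(events):
    """Return (previous_event, next_event, elapsed_seconds) for each pair of
    consecutive events, where events is a time-ordered list of
    (label, timestamp_in_seconds) pairs."""
    return [(a, b, t2 - t1) for (a, t1), (b, t2) in zip(events, events[1:])]

# Hypothetical log for a single item; the long gap before the first action
# would typically be read as encoding/planning time.
log = [("item_shown", 0.0), ("first_click", 4.2), ("drag_end", 5.0), ("submit", 9.5)]
for before, after, gap in extract_pauses(log):
    print(f"{before} -> {after}: {gap:.1f}s")
```

The paper's point is precisely that such a gap is ambiguous without a task-specific cognitive model: the same pre-click pause could reflect encoding, planning, or disengagement.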
Title: Theory-Based Behavioral Indicators for Children's Purchasing Self-Control in a Computer-Based Simulated Supermarket
Authors: Philine Drake, J. Hartig, Manuel Froitzheim, Gunnar Mau, Hanna Schramm-Klein, M. Schuhen
Journal: European Journal of Psychological Assessment
Pub Date: 2023-04-12 | DOI: 10.1027/1015-5759/a000757
Abstract: The present study investigates elementary school children's self-control as an important aspect of their purchasing literacy in a simulated supermarket. To this end, 136 children were asked to shop on a limited budget and work through a given shopping list. We processed the data from this task in two ways: First, we combined process and product data into a common score for a differentiated assessment of task performance. Second, we derived theory-based behavioral indicators from the log data. Using a structural equation model, we confirmed that the covariance among these indicators could be explained by a self-control factor. Within the structural equation model, we also investigated whether self-controlled behavior mediated the relationship between self-reported impulsivity and task performance. This could not be confirmed, even though self-controlled behavior was positively related to task performance. Self-control and impulsivity both correlated positively with a distrustful attitude toward advertising. Higher self-control was also significantly related to better monitoring of one's finances at the point of sale.
Title: An Impartial Measure of Collective Action
Authors: Carmen Cervone, Caterina Suitner, Luciana Carraro, A. Maass
Journal: European Journal of Psychological Assessment
Pub Date: 2023-03-30 | DOI: 10.1027/1015-5759/a000762
Abstract: In three studies, we developed and validated the Belief-aligned Collective Action scale (BCA), a new measure of collective action that disentangles engagement in collective action from the ideological stance on the issue, two aspects that previous measures have confounded. In Studies 1a (N = 585 Italian adult participants, 61% women) and 1b (N = 296 British adult participants, 52% women), an exploratory factor analysis identified two factors, Normative and Non-normative actions. In Study 2 (N = 602 Italian adult participants, 50% women), a bifactor confirmatory factor analysis showed an adequate fit of the two-factor structure. Across studies, the scale showed good internal reliability (as indicated by Cronbach's α and ω total) and correlations in the predicted direction with common predictors of collective action, namely efficacy, anger, and group identity. Furthermore, Study 2 shows that the scale generalizes to multiple topics, some more relevant to left-wing people (e.g., wealth tax) and some to right-wing people (e.g., abortion). In these cases, we find no evidence for an effect of ideological variables such as political orientation and system justification. This tool allows researchers to assess collective action without ideological bias, helping to bridge the ideological knowledge gap in social psychology.
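Internal reliability of the kind reported above is commonly summarized with Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal, dependency-free sketch; the ratings below are invented:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items score matrix (list of rows),
    using sample variances throughout."""
    k = len(rows[0])                                  # number of items
    item_vars = [variance(col) for col in zip(*rows)]  # per-item variance
    total_var = variance([sum(row) for row in rows])   # variance of sum scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented 5-point ratings: 6 respondents x 4 items
ratings = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 5],
]
print(round(cronbach_alpha(ratings), 3))  # -> 0.955
```

McDonald's ω total, also reported by the authors, additionally requires factor loadings from a fitted measurement model, so it is not reproduced in this sketch.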
Title: Development, Validation, and Evidence of Measurement Invariance of a Shortened Measure of Trait Test Anxiety
Authors: Teresa M. Ober, Cheng Liu, Ying Cheng
Journal: European Journal of Psychological Assessment
Pub Date: 2023-03-23 | DOI: 10.1027/1015-5759/a000761
Abstract: We develop and validate a short self-report measure of test anxiety, the Trait Test Anxiety Inventory – Short (TTAI-S), following the Kane (2013) validation framework. Data were collected from three independent samples of young adults in the US (N = 629; Mage = 22.25 years). Evidence was gathered to support three aspects of the validity argument (i.e., scoring, extrapolation, and generalization). Good internal consistency and a confirmed single-factor structure supported scoring inferences. Scalar measurement invariance across samples (Internet vs. undergraduate students) and demographic subgroups (i.e., gender, race/ethnicity, and parental educational attainment) provided evidence for generalization inferences. Significant associations between the TTAI-S score and theoretically relevant constructs (state test anxiety, performance expectation, and self-confidence in math), together with weaker associations with less relevant constructs (enjoyment, motivation, and values in learning math), substantiated extrapolation inferences. Having established measurement invariance, we examined demographic differences and found that students historically underserved or underrepresented in STEM disciplines reported greater test anxiety than their counterparts. These findings support the validity of the TTAI-S, a concise measure that is easy to administer and score. The TTAI-S may be used to further investigate trait test anxiety in diverse populations, particularly factors that may contribute to or mitigate group differences.
Title: Computerized Process-Oriented Dynamic Testing of Children's Ability to Reason by Analogy Using Log Data
Authors: J. Veerbeek, B. Vogelaar
Journal: European Journal of Psychological Assessment
Pub Date: 2023-03-21 | DOI: 10.1027/1015-5759/a000749
Abstract: This study investigated the value of process data obtained from a group-administered computerized dynamic test of analogical reasoning that followed a pretest-training-posttest design. We sought to evaluate the effects of training on processes and performance, as well as the relationships between process measures and performance on the dynamic test. Participants were N = 86 primary school children (Mage = 8.11 years, SD = 0.63). The test consisted of constructed-response geometrical analogy items requiring several actions to construct an answer. Process data enabled scoring of the total time, the time taken for initial planning of the task, the time taken for checking the provided answer, and the variation in solving time. Training led to improved performance compared with repeated practice, but this improvement was not reflected in task-solving processes. Almost all process measures were related to performance, but the effects of training or repeated practice on this relationship differed widely between measures. In conclusion, the findings indicate that investigating process indicators within computerized dynamic testing of analogical reasoning provides information about children's learning processes, but that not all processes are affected in the same way by training.
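The four process measures listed above (total time, initial planning time, checking time, and variation in solving time) can be derived from log timestamps along these lines; the event keys are assumptions for illustration, not the authors' actual coding scheme:

```python
from statistics import pstdev

def process_measures(item_logs):
    """Timing indicators from per-item event logs (timestamps in seconds).
    Each log maps an event name to the time it occurred; the keys used here
    ("shown", "first_action", "answer_complete", "submit") are illustrative."""
    totals = [log["submit"] - log["shown"] for log in item_logs]
    planning = [log["first_action"] - log["shown"] for log in item_logs]
    checking = [log["submit"] - log["answer_complete"] for log in item_logs]
    return {
        "total_time": sum(totals),
        "mean_planning_time": sum(planning) / len(planning),
        "mean_checking_time": sum(checking) / len(checking),
        "solving_time_sd": pstdev(totals),  # variation in solving time
    }

# Two hypothetical item logs
logs = [
    {"shown": 0.0, "first_action": 3.0, "answer_complete": 10.0, "submit": 12.0},
    {"shown": 0.0, "first_action": 2.0, "answer_complete": 7.0, "submit": 8.0},
]
print(process_measures(logs))
```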
Title: Process and Product in Computer-Based Assessments
Authors: B. Zumbo, B. Maddox, Naomi M. Care
Journal: European Journal of Psychological Assessment
Pub Date: 2023-03-21 | DOI: 10.1027/1015-5759/a000748
Abstract: There is no consensus among assessment researchers about many of the central problems of response process data, including what it is and what it comprises. The Standards for Educational and Psychological Testing (American Educational Research Association et al., 2014) locate process data within their five sources of validity evidence. However, we rarely see a conceptualization of response processes; rather, the focus is on the techniques and methods of assembling response process indices or statistical models. The method often overrides clear definitions, and, as a field, we may therefore conflate method and methodology, much as we have conflated validity and validation (Zumbo, 2007). In this paper, we aim to clear the conceptual ground and explore the scope of a holistic framework for the validation of process and product. We review prominent conceptualizations of response processes and their sources and explore some fundamental questions: Should we make a theoretical and practical distinction between response processes and response data? To what extent do the uses of process data reflect the principles of deliberate, educational, and psychological measurement? To answer these questions, we consider the case of item response times and the potential for variation associated with disability and neurodiversity.
Title: Mouse Chase
Authors: A. Pokropek, Tomasz Żółtak, M. Muszyński
Journal: European Journal of Psychological Assessment
Pub Date: 2023-03-21 | DOI: 10.1027/1015-5759/a000758
Abstract: Web surveys offer new research possibilities, but they also pose specific problems. One of them is a higher risk of careless, inattentive, or otherwise invalid responses. Paradata, that is, data collected in addition to the responses themselves, are one potential tool for screening out problematic responses in web-based surveys. One of the most promising forms of paradata is the movement, or trajectory, of the cursor while making a response. This study constructed indicators from such data, presented correlations between them, and provided an interpretation and validation of these indicators by correlating them with previously established indices of careless responding. Finally, it tested cursor movement indices under different motivational states induced by experimental instructions. Cursor movement indices proved to be moderately related to classical careless responding indices, but some of them (horizontal distance traveled, as well as speed and acceleration on the vertical dimension) were as responsive to the manipulation conditions as the classical indices. The potential role of cursor movement indices in survey practice and future studies in this area are discussed.
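Indices like those named above (distance traveled, plus speed and acceleration along one axis) can be computed from time-stamped cursor samples. The definitions and the sample track below are illustrative, not the authors' exact operationalization:

```python
def cursor_indices(samples):
    """Simple movement indices from (t, x, y) cursor samples, with t in
    seconds and x/y in pixels; samples must be time-ordered."""
    # Horizontal distance traveled, summed over sampling intervals
    horiz_dist = sum(abs(x2 - x1)
                     for (_, x1, _), (_, x2, _) in zip(samples, samples[1:]))
    # Vertical speed per interval, then acceleration between intervals
    vert_speed = [(y2 - y1) / (t2 - t1)
                  for (t1, _, y1), (t2, _, y2) in zip(samples, samples[1:])]
    starts = [t for (t, _, _) in samples[:-1]]
    vert_accel = [(v2 - v1) / (s2 - s1)
                  for v1, v2, s1, s2
                  in zip(vert_speed, vert_speed[1:], starts, starts[1:])]
    return {
        "horizontal_distance": horiz_dist,
        "mean_abs_vertical_speed": sum(abs(v) for v in vert_speed) / len(vert_speed),
        "mean_abs_vertical_accel": sum(abs(a) for a in vert_accel) / len(vert_accel),
    }

# Hypothetical 10 Hz cursor track while answering one survey item
track = [(0.0, 100, 200), (0.1, 140, 210), (0.2, 150, 260), (0.3, 150, 300)]
print(cursor_indices(track))
```

In a real screening pipeline such indices would be computed per item or per page and then correlated with conventional carelessness indices (e.g., longstring or response-time flags), as the study describes.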