A contextualized assessment of reliability and validity of student-initiated momentary self-reports during lectures

Pankaj Chavan, Ritayan Mitra, Abhinav Sarkar, Aditya Panwar
Educational Technology Research and Development, published online 2023-11-29
DOI: 10.1007/s11423-023-10304-2
The use of Experience Sampling Methods (ESM) to assess students’ experiences, motivation, and emotions by sending signals to students at random or fixed time points has grown due to recent technological advances. Such methods offer several advantages, such as capturing the construct in the moment (i.e., when the events are fresh in respondents’ minds) and providing a better understanding of the temporal and dynamic nature of the construct, and they are often considered more valid than retrospective self-reports. This article investigates the validity and reliability of a variant of the ESM, the DEBE (an acronym for difficult, easy, boring, and engaging, pronounced ‘Debbie’) feedback, which captures student-driven (as and when the student wants to report) momentary self-reports of cognitive-affective states during a lecture. The DEBE feedback is collected through four buttons on mobile phones or laptops used by students. We collected DEBE feedback from several video lectures (N = 722, 8 lectures) in different courses and examined the threats to validity and reliability. Our analysis revealed that variables such as student motivation, learning strategies, academic performance, and prior knowledge did not affect feedback-giving behavior. Monte Carlo simulations showed that, for a class size of 50 to 120, an average of 30 students can provide representative and actionable feedback, and the feedback was tolerant of up to 20% of students giving erroneous or biased feedback. The article discusses in detail the aforementioned and other validity and reliability threats that need to be considered when working with such data. These findings, although specific to the DEBE feedback, are intended to supplement the momentary self-report literature, and the study is expected to provide a roadmap for establishing the validity and reliability of such novel data types.
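The Monte Carlo representativeness argument in the abstract can be illustrated with a toy simulation: draw a random subset of a class's button-press "votes" and check how often the subsample's aggregate tracks the full-class aggregate, with some fraction of responders injecting erroneous feedback. The parameter values below (class size, 0.3 "true" boring rate, 10-point agreement threshold) are illustrative assumptions, not values or code from the study.

```python
import random

def simulate_representativeness(class_size=100, sample_size=30,
                                error_rate=0.2, trials=2000, seed=0):
    """Toy Monte Carlo sketch of subsample representativeness.

    Each student 'votes' boring (1) or engaging (0) on a lecture
    segment; a fraction of students respond erroneously (vote flipped).
    Returns the fraction of trials in which a random subsample's
    boring-rate lands within 10 percentage points of the full class's.
    """
    random.seed(seed)
    true_rate = 0.3  # assumed fraction of the class finding the segment boring
    agree = 0
    for _ in range(trials):
        votes = []
        for _ in range(class_size):
            v = 1 if random.random() < true_rate else 0
            if random.random() < error_rate:  # erroneous/biased responder
                v = 1 - v
            votes.append(v)
        full_rate = sum(votes) / class_size
        sub = random.sample(votes, sample_size)  # instructor only sees a subset
        sub_rate = sum(sub) / sample_size
        if abs(sub_rate - full_rate) <= 0.10:  # within 10 points = representative
            agree += 1
    return agree / trials

rate = simulate_representativeness()
print(f"fraction of trials within 10 points of the class rate: {rate:.2f}")
```

Varying `sample_size` and `error_rate` in a sketch like this shows the qualitative pattern the abstract reports: aggregate feedback degrades gracefully as the responding subset shrinks or noise grows, rather than failing abruptly.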