Pub Date: 2024-06-18 | DOI: 10.1103/physrevphyseducres.20.010153
A. R. Piña, Zeynep Topdemir, John R. Thompson
As part of an effort to examine students’ mathematical sensemaking (MSM) in a spins-first quantum mechanics course during the transition from discrete (spin) to continuous (position) systems, students were asked to construct an eigenvalue equation for a one-dimensional position operator. A subset of responses took the general form of an eigenvalue equation written in Dirac notation. Symbolic blending, a combination of symbolic forms and conceptual blending, as well as a categorical framework for MSM, were used in the analysis. The data suggest two different symbolic forms for an eigenvalue equation that share a symbol template but have distinct conceptual schemata: “a transformation that reproduces the original” and “to operate is to act.” These symbolic forms, when blended with two sets of contextual knowledge, form the basis of three different interpretations of eigenvalue equations modeled here as conceptual blends. The analysis in this study serves as a novel example of, and preliminary evidence for, student engagement in sensemaking activities in the transition from discrete to continuous systems in a spins-first quantum mechanics course.
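The construction task can be illustrated with the textbook form of such an equation (a standard form, assumed here rather than reproduced from the paper or from student work): the position operator acting on one of its eigenstates returns that same state, scaled by the position eigenvalue,

```latex
\hat{x}\,\lvert x \rangle = x\,\lvert x \rangle
```

where $\hat{x}$ is the one-dimensional position operator, $\lvert x \rangle$ its eigenstate, and the eigenvalue $x$ ranges over a continuum, in contrast with the discrete eigenvalues of the spin operators students encounter earlier in a spins-first course.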
Title: Student understanding of eigenvalue equations in quantum mechanics: Symbolic blending and sensemaking analysis
Journal: Physical Review Physics Education Research (IF 3.1)
Pub Date: 2024-06-13 | DOI: 10.1103/physrevphyseducres.20.010152
Tong Wan, Zhongzhou Chen
Instructors’ feedback plays a critical role in students’ development of conceptual understanding and reasoning skills. However, grading student written responses and providing personalized feedback can take a substantial amount of time, especially in large-enrollment courses. In this study, we explore using GPT-3.5 to write feedback on students’ written responses to conceptual questions with prompt engineering and few-shot learning techniques. In stage I, we used a small portion (n = 20) of the student responses on one conceptual question to iteratively train GPT to generate feedback. Four of the responses, paired with human-written feedback, were included in the prompt as examples for GPT. We tasked GPT to generate feedback for another 16 responses and refined the prompt through several iterations. In stage II, we gave four student researchers (one graduate and three undergraduate researchers) the 16 responses as well as two versions of feedback, one written by the authors and the other by GPT. Students were asked to rate the correctness and usefulness of each feedback message and to indicate which one was generated by GPT. The results showed that students tended to rate the human-written and GPT-generated feedback equally on correctness, but they all rated the feedback by GPT as more useful. Additionally, the success rates of identifying GPT’s feedback were low, ranging from 0.1 to 0.6. In stage III, we tasked GPT to generate feedback for the rest of the students’ responses (n = 65). The feedback messages were rated by four instructors based on the extent of modification needed if they were to give the feedback to students. All four instructors rated approximately 70% (ranging from 68% to 78%) of the feedback statements as needing only minor or no modification. This study demonstrated the feasibility of using generative artificial intelligence (AI) as an assistant to generate feedback for student written responses with only a relatively small number of examples in the prompt. An AI assistant can be one of the solutions to substantially reduce the time spent on grading student written responses.
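The few-shot setup described above can be sketched in code. This is a minimal illustration, not the authors' prompt: the system instruction and the (response, feedback) pairs below are invented stand-ins for the four human-written examples included in the stage I prompt.

```python
# Sketch of few-shot feedback generation: pair a few graded student
# responses with instructor-written feedback, then ask the model to write
# feedback for a new response. All prompt text here is hypothetical.

SYSTEM_PROMPT = (
    "You are a physics instructor. Given a student's written answer to a "
    "conceptual question, write brief, constructive feedback."
)

# Hypothetical (student response, instructor feedback) example pairs.
FEW_SHOT_EXAMPLES = [
    ("The ball keeps moving because a force keeps pushing it.",
     "Remember Newton's first law: no net force is needed to maintain motion."),
    ("Heavier objects fall faster because gravity pulls harder on them.",
     "Gravity does pull harder, but the larger mass also resists "
     "acceleration more, so the acceleration is the same."),
]

def build_messages(new_response: str) -> list[dict]:
    """Assemble a chat-completion message list with few-shot examples."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for student_answer, feedback in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": student_answer})
        messages.append({"role": "assistant", "content": feedback})
    messages.append({"role": "user", "content": new_response})
    return messages

# An actual call would look roughly like this (requires the openai package
# and an API key); it is commented out so the sketch stays self-contained:
# client = openai.OpenAI()
# reply = client.chat.completions.create(model="gpt-3.5-turbo",
#                                        messages=build_messages(response))

messages = build_messages("Objects in orbit have no gravity acting on them.")
print(len(messages))  # system + 2 examples x 2 turns + 1 new response
```

Refining the prompt "through several iterations," as the study describes, would amount to editing `SYSTEM_PROMPT` and the example pairs and re-inspecting the generated feedback.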
Title: Exploring generative AI assisted feedback writing for students’ written responses to a physics conceptual question with prompt engineering and few-shot learning
Journal: Physical Review Physics Education Research (IF 3.1)
Pub Date: 2024-06-05 | DOI: 10.1103/physrevphyseducres.20.010150
Christoph Hoyer, Raimund Girwidz
Vector fields are a highly abstract physical concept that is often taught using visualizations. Although vector representations are particularly suitable for visualizing quantitative data, they are often confusing, especially when describing real fields such as magnetic and electric fields, as the vector arrows can overlap. The present study investigates vector understanding at the end of secondary education. In particular, we examined the extent to which the geometry of a field can be derived from conventional unit vector representations and from representations with centered unit vectors. To support this understanding, two exercises were compared. The unirepresentational exercise argued within the conventional unit vector representation, while the multirepresentational exercise attempted to support the link between centered and conventional unit vectors. The results show that almost all participants solved the items on generating vector representations correctly, but significant difficulties were encountered in interpreting vector representations. Drawing and interpreting vector representations therefore appear to be different skills that should be practiced intensively and in an integrated way. Various problems could be identified when interpreting vector representations. For example, the number of vectors is often erroneously used to estimate the strength of the field, although more vectors per surface element actually only increase the resolution of the representation. Here, however, the results suggest that the longitudinal density and the transverse density of the drawn vectors are perceived differently by the learners. Furthermore, the learners recognized the field’s geometry much more readily from centered unit vectors than from conventional unit vectors. Errors occurred especially when interpreting the geometry of conventional unit vector representations of rotational fields and of fields containing both sources and sinks, while the geometries of fields containing only sinks were interpreted quite well. The comparison between the two training exercises showed that a promising approach to deepening students’ understanding would be an exercise that contrasts conventional and centered unit vector representations and explains how to translate from one representation to the other, rather than one describing the main elements of only a single representation. Finally, based on the results of the study, we propose a strategy for teaching vector representations in schools. Given the significantly improved readability of the representation with centered unit vectors, the results even raise the question of whether this type of representation could eventually replace the conventional representation in textbooks and learning materials.
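The two drawing conventions compared in the study differ only in where each unit-length arrow is anchored relative to its grid point. A minimal sketch (assumed, not taken from the paper) computes a unit vector field for a point sink; with matplotlib, the anchoring choice is just the `pivot` option of `quiver`, noted in the comments.

```python
# Compute a unit vector representation of a field on a grid: every arrow
# is normalized to length 1, so direction alone encodes the geometry.
import numpy as np

def unit_field(fx, fy, eps=1e-12):
    """Normalize field components to unit magnitude (eps avoids 0/0)."""
    mag = np.hypot(fx, fy)
    return fx / (mag + eps), fy / (mag + eps)

# Example field: a point sink at the origin, F = -r / |r|^3.
x, y = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
r3 = (x**2 + y**2) ** 1.5 + 1e-12
ux, uy = unit_field(-x / r3, -y / r3)

# Plotting (not run here) would contrast the two conventions:
#   plt.quiver(x, y, ux, uy, pivot="tail")  # conventional unit vectors
#   plt.quiver(x, y, ux, uy, pivot="mid")   # centered unit vectors
```

Because both panels show identical unit vectors, any difference in how readily students read off the field's geometry can be attributed to the anchoring convention alone.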
Title: Vector representations and unit vector representations of fields: Problems of understanding and possible teaching strategies
Journal: Physical Review Physics Education Research (IF 3.1)
Pub Date: 2024-05-31 | DOI: 10.1103/physrevphyseducres.20.010149
John Pace, John Hansen, John Stewart
Machine learning models were constructed to predict student performance in an introductory mechanics class at a large land-grant university in the United States using data from 2061 students. Students were classified as either being at risk of failing the course (earning a D or F) or not at risk (earning an A, B, or C). The models focused on variables available in the first few weeks of the class, which could potentially allow for early interventions to help at-risk students. Multiple types of variables were used in the model: in-class variables (average homework and clicker quiz scores), institutional variables [college grade point average (GPA)], and noncognitive variables (self-efficacy). The substantial imbalance between the pass and fail rates of the course, with only about 10% of students failing, required modifications to the machine learning algorithms. Decision threshold tuning and upsampling were successful in improving performance for at-risk students. Logistic regression combined with a decision threshold tuned to maximize balanced accuracy yielded the strongest classifier, with a DF accuracy of 83% and an ABC accuracy of 81%. Measures of variable importance involving changes in balanced accuracy identified homework grades, clicker grades, college GPA, and the fraction of college classes successfully completed as the most important variables in predicting success in introductory physics. Noncognitive variables added little predictive power to the models. Classification models with performance near that of the best-performing models using the full set of variables could be constructed with very few variables (homework average, clicker scores, and college GPA) using straightforward-to-implement algorithms, suggesting that these techniques may be fairly easy to incorporate into many physics classes.
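The decision-threshold tuning described above can be sketched as follows. This is a hedged illustration on synthetic data, not the paper's analysis: the ~10% fail rate and the single "early homework average" feature are invented to mimic the setting, and the threshold grid is an arbitrary choice.

```python
# Train a logistic regression on an imbalanced synthetic dataset, then pick
# the probability cutoff that maximizes balanced accuracy rather than
# using the default 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
n = 2000
at_risk = rng.random(n) < 0.10  # ~10% of students fail, as in the course
# Hypothetical early-semester feature: at-risk students have lower homework
# averages on average; a second pure-noise feature is added for realism.
hw_avg = np.where(at_risk, rng.normal(0.60, 0.15, n), rng.normal(0.85, 0.10, n))
X = np.column_stack([hw_avg, rng.normal(size=n)])
y = at_risk.astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]

# Scan candidate thresholds; keep the one with the best balanced accuracy.
thresholds = np.linspace(0.05, 0.95, 19)
scores = [balanced_accuracy_score(y, (probs >= t).astype(int))
          for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]
print(best_t, max(scores))
```

With only ~10% positives, the tuned threshold typically lands well below 0.5, which is exactly what lifts the at-risk (DF) accuracy relative to a default classifier. A production version would, of course, tune the threshold on a held-out set rather than the training data.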
Title: Exploring techniques to improve machine learning’s identification of at-risk students in physics classes
Journal: Physical Review Physics Education Research (IF 3.1)
Pub Date: 2024-05-31 | DOI: 10.1103/physrevphyseducres.20.010148
Lan Yang, Leheng Huang, Xianqiu Wu, Jianwen Xiong, Lei Bao, Yang Xiao
In physics education, a number of studies have developed assessments of teachers’ knowledge of student understanding (KSU) of specific physics concepts using modified versions of existing concept inventories, in which teachers were asked to predict students’ popular incorrect answers. The results provide useful but indirect information for making inferences about teachers’ knowledge of the misconceptions that students may be using in answering the questions. To improve the assessment of teachers’ KSU, a new instrument was developed using a three-tier item design. The items were adapted from 17 questions from the Force Concept Inventory on force and motion. Each item was designed in three tiers, with tier 1 asking for teachers’ own answers to the question to test their content knowledge, tier 2 asking for teachers’ predictions of students’ popular incorrect answers, and tier 3 asking for teachers’ explanations of students’ incorrect answers in an open-ended form. The three-tier design captures teachers’ content knowledge, predictions, and explanations in a single item to allow explicit measures of teachers’ own content knowledge and their KSU of students’ misconceptions. The instrument was validated with preservice physics teachers, who were master’s-level graduate students at a normal university in China. The assessment results also suggest that the preservice teachers’ KSU of force and motion was only moderately developed, and their content knowledge was uncorrelated with their KSU. In addition, a four-level progression scale of KSU was developed, which categorized the preservice teachers into five proficiency groups.
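The three-tier item structure can be made concrete with a small data sketch. The item content below is invented for illustration (the actual instrument adapts Force Concept Inventory questions, which are not reproduced here); only the tier structure follows the description above.

```python
# Hypothetical encoding of one three-tier KSU item. Tier 1 scores the
# teacher's own content knowledge, tier 2 their prediction of students'
# popular incorrect answer, and tier 3 is open-ended and coded by raters.
item = {
    "stem": "A ball is thrown straight up. At the top of its flight, "
            "what is the net force on it?",
    "tier1": {"choices": ["A: zero", "B: gravity, downward", "C: upward"],
              "answer_key": "B"},
    "tier2": {"popular_incorrect": "A"},   # the common student misconception
    "tier3": {"explanation": None},        # free response, coded qualitatively
}

def score_tiers(response):
    """Dichotomously score tiers 1 and 2; tier 3 is left for coding."""
    return {
        "content": response["tier1"] == item["tier1"]["answer_key"],
        "ksu_prediction": response["tier2"] == item["tier2"]["popular_incorrect"],
    }

print(score_tiers({"tier1": "B", "tier2": "A"}))
# -> {'content': True, 'ksu_prediction': True}
```

Scoring the two tiers separately is what allows content knowledge and KSU to be measured independently within a single item, which is how the study can report them as uncorrelated.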
Title: Assessment of preservice physics teachers’ knowledge of student understanding of force and motion
Journal: Physical Review Physics Education Research (IF 3.1)
Pub Date: 2024-05-29 | DOI: 10.1103/physrevphyseducres.20.010147
J. Caleb Speirs, MacKenzie R. Stetzer, Beth A. Lindsey
In the introductory calculus-based physics course, students are often expected to build conceptual understanding and to develop and refine skills in problem solving and qualitative inferential reasoning. Many of the research-based materials developed over the past 30 years by the physics education research community use sequences of scaffolded questions to step students through a qualitative inferential reasoning chain. It is often tacitly assumed that, in addition to building conceptual understanding, such materials improve qualitative reasoning skills. However, clear documentation of the impact of such materials on qualitative reasoning skills is critical. New methodologies are needed to better study reasoning processes and to disentangle, to the extent possible, processes related to physics content from processes general to all human reasoning. As a result, we have employed network analysis methodologies to examine student responses to reasoning-related tasks in order to gain deeper insight into the nature of student reasoning in physics. In this paper, we show that network analysis metrics are both interpretable and valuable when applied to student reasoning data generated from reasoning chain construction tasks. We also demonstrate that documentation of improvements in the articulation of specific lines of reasoning can be obtained from a network analysis of responses to reasoning chain construction tasks.
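The core idea of a network analysis of reasoning chains can be sketched briefly. This is an illustrative toy, not the authors' analysis code: the reasoning-element labels are invented, and the metrics shown are just two simple examples of what such a graph supports.

```python
# Encode each student's constructed reasoning chain as a directed graph
# whose nodes are reasoning elements and whose edges are "therefore" links,
# then aggregate chains across students and read off network metrics.
import networkx as nx

# Two hypothetical student chains explaining why a block speeds up:
chains = [
    ["net force is nonzero", "acceleration is nonzero", "speed increases"],
    ["net force is nonzero", "speed increases"],  # a chain that skips a step
]

G = nx.DiGraph()
for chain in chains:
    nx.add_path(G, chain)  # each consecutive pair becomes a directed edge

# Example metrics: which elements anchor many inferences, and how densely
# connected the aggregate reasoning structure is.
out_deg = dict(G.out_degree())
density = nx.density(G)
print(out_deg["net force is nonzero"], round(density, 3))
```

Comparing such metrics before and after instruction is one way the improvements in articulated lines of reasoning described above could be documented quantitatively.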
Title: Utilizing network analysis to explore student qualitative inferential reasoning chains
Journal: Physical Review Physics Education Research (IF 3.1)
Pub Date: 2024-05-28 | DOI: 10.1103/physrevphyseducres.20.010146
Jared P. Canright, Suzanne White Brahmia
[This paper is part of the Focused Collection on Instructional labs: Improving traditions and new directions.] We report on a study of the effects of laboratory activities that model fictitious laws of physics in a virtual reality environment on (i) students’ epistemology about the role of experimental physics in class and in the world; (ii) students’ self-efficacy; and (iii) the quality of student engagement with the lab activities. We create opportunities for students to practice physics as a means of creating and validating new knowledge by simulating real and fictitious physics in virtual reality (VR). This approach seeks to steer students away from a confirmation mindset in labs by eliminating any form of prior or outside models to confirm. We refer to the activities using this approach as Novel Observations in Mixed Reality (NOMR) labs. We examined NOMR’s effects in 100-level and 200-level undergraduate courses. Using pre-post measurements, we find that after NOMR labs, students in both populations were more expertlike in their epistemology about experimental physics and held stronger self-efficacy about their abilities to do the kinds of things experimental physicists do. Through the lens of the psychological theory of flow, we found that students engage as productively with NOMR labs as with traditional hands-on labs. This engagement persisted after the novelty of VR in the classroom wore off, suggesting that these effects were due to the pedagogical design rather than the medium of the intervention. We conclude that these NOMR labs offer an approach to physics laboratory instruction that centers the development of students’ understanding of and comfort with the authentic practice of science.
Title: Modeling novel physics in virtual reality labs: An affective analysis of student learning
Journal: Physical Review Physics Education Research (IF 3.1)
Pub Date : 2024-05-23, DOI: 10.1103/physrevphyseducres.20.010145
Gerd Kortemeyer, Wolfgang Bauer
As a result of the pandemic, many physics courses moved online, and the popularity of Internet-based problem-solving sites and forums rose alongside them. With the emergence of large language models, another shift occurred. One year into the public availability of these models, how has online help-seeking behavior among introductory physics students changed, and what effect do different patterns of online resource usage have? In a mixed-methods approach, we investigate student choices and their impact on the assessment components of an online introductory physics course for scientists and engineers. We find that students still mostly rely on traditional Internet resources and that their usage strongly influences the outcomes of low-stakes unsupervised quizzes. We empirically identified distinct clusters of help-seeking and resource-usage patterns among the students; the impact of students' cluster membership on the supervised assessment components of the course, however, is nonsignificant.
"Cheat sites and artificial intelligence usage in online introductory physics courses: What is the extent and what effect does it have on assessments?" Gerd Kortemeyer, Wolfgang Bauer. Physical Review Physics Education Research, DOI: 10.1103/physrevphyseducres.20.010145
Pub Date : 2024-05-16, DOI: 10.1103/physrevphyseducres.20.010143
Álvaro Suárez, Arturo C. Marti, Kristina Zuza, Jenaro Guisasola
We investigate learning difficulties that second-year students in electromagnetism courses encounter when they apply the Ampère-Maxwell law. Using phenomenography, we analyzed written answers from 65 undergraduate physics students to four questions on Ampère's law and the Ampère-Maxwell law, complemented by interviews with 12 students. To design the questionnaire, we conducted an epistemological analysis of classical electromagnetism, which helped us identify a set of key concepts essential to understanding the theory, guided the definition of learning objectives, and informed the drafting of the questions. The results revealed that students found it hard to recognize the framework of validity of Ampère's law and to apply the Ampère-Maxwell law. In particular, they have difficulty recognizing when the displacement current appears and how the circulation of the magnetic field relates to an electric field that varies over time.
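For reference (the abstract does not reproduce it), the law in question can be written in integral form, in SI units, as:

```latex
% Ampère-Maxwell law, integral form (SI units): the circulation of the
% magnetic field around a closed curve C bounding a surface S equals the
% enclosed conduction current plus Maxwell's displacement-current term.
\oint_{C} \vec{B} \cdot d\vec{l}
  = \mu_0 I_{\mathrm{enc}}
  + \mu_0 \varepsilon_0 \frac{d}{dt} \int_{S} \vec{E} \cdot d\vec{A}
```

Dropping the second term recovers Ampère's law, which is valid only when the electric flux through S is constant in time; that second, displacement-current term is precisely the contribution the study reports students struggle to recognize.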
"Learning difficulties among students when applying Ampère-Maxwell's law and its implications for teaching," Álvaro Suárez, Arturo C. Marti, Kristina Zuza, Jenaro Guisasola. Physical Review Physics Education Research, DOI: 10.1103/physrevphyseducres.20.010143