Pub Date: 2025-03-01 | Epub Date: 2024-11-27 | DOI: 10.1152/advan.00185.2024
Heidi L Lujan, Stephen E DiCarlo
John Wooden, the legendary University of California, Los Angeles (UCLA) basketball coach, consistently emphasized the distinction between winning and true success. For Wooden, success was not about defeating others or standing out in a competition but about personal growth, self-improvement, and the pursuit of excellence. His philosophy offers a powerful lesson, particularly for educators, as they guide students through their academic journeys. Wooden's message highlights the importance of fostering an environment where success is measured by effort and progress, not merely by grades or test scores. Unfortunately, many educators seem to overlook this, focusing heavily on grades as the primary measure of achievement. By placing such a strong emphasis on grades, teachers inadvertently create a culture where students begin to equate their self-worth with their performance on a test. This not only diminishes the value of personal growth but also fosters anxiety and discouragement among students who may struggle academically. Students may begin to fear being wrong and avoid challenges and opportunities. This limits creativity and the chance to learn, grow, and contribute to society. Wooden's wisdom reminds us that educators have the power to influence how students perceive success. By encouraging a more holistic view of achievement, one that values hard work, resilience, and continuous improvement, teachers can help students develop a healthier, more positive understanding of what it means to succeed. In today's educational system, this shift is crucial, as too many students are being taught to see their value solely in terms of grades, rather than their personal and intellectual growth.
Title: Students are more than their scores: educators have the power to change how students perceive success. Journal: Advances in Physiology Education, pp. 93-95.
Pub Date: 2025-03-01 | Epub Date: 2025-01-03 | DOI: 10.1152/advan.00188.2024
Himel Mondal
The integration of large language models (LLMs) in medical education offers both opportunities and challenges. While these artificial intelligence (AI)-driven tools can enhance access to information and support critical thinking, they also pose risks like potential overreliance and ethical concerns. To ensure ethical use, students and instructors must recognize the limitations of LLMs, maintain academic integrity, and handle data cautiously, and instructors should prioritize content quality over AI detection methods. LLMs can be used as supplementary aids rather than primary educational resources, with a focus on enhancing accessibility and equity and fostering a culture of feedback. Institutions should create guidelines that align with their unique educational values, providing clear frameworks that support responsible LLM usage while addressing risks associated with AI in education. Such guidelines should reflect the institution's pedagogical mission, whether centered on clinical practice, research, or a mix of both, and should be adaptable to evolving educational technologies.
Title: Ethical engagement with artificial intelligence in medical education. Journal: Advances in Physiology Education, pp. 163-165.
Pub Date: 2025-01-24 | DOI: 10.1152/advan.00171.2024
Beth Beason-Abmayr, David R Caprette
We present an alternative to the traditional classroom lecture on the topics of metabolic scaling, allometric relationships between metabolic rate (MR) and body size, and reasons for rejecting Rubner's surface "law," concepts that students have described as challenging, counterintuitive, and/or mathematical. In groups, students work with published data on MR and body size for species representing all five vertebrate groups. To support the exercise, we developed a worksheet that has students define the concept in their own words, compare different measures of MR, and evaluate plots of MR and mass-specific MR versus body mass for both homeotherms and poikilotherms. Students also attempt to explain why selected species have exceptionally high or low MR values for their body sizes. Student feedback indicated active learning is an effective way to learn the concepts of metabolic scaling and allometric relationships and that the opportunity to work in groups with real data stimulates interest and an appreciation for the importance of metabolic scaling to the understanding of animal physiology.
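The allometric relationship at the center of this exercise, MR = a·M^b, becomes a straight line on log-log axes, so the scaling exponent b is just the slope of a least-squares fit. A minimal sketch of that calculation, using hypothetical mass and metabolic-rate values chosen only to illustrate the procedure (not data from the article):

```python
import math

# Hypothetical body masses (g) and metabolic rates (mL O2/h) for five
# animals; the values are illustrative, not published measurements.
mass = [25.0, 300.0, 4000.0, 70000.0, 600000.0]
mr = [40.0, 220.0, 1700.0, 14000.0, 70000.0]

# Allometric model MR = a * M**b is linear on log-log axes:
# log(MR) = log(a) + b * log(M), so b is the slope of the fit.
x = [math.log10(m) for m in mass]
y = [math.log10(r) for r in mr]
xm, ym = sum(x) / len(x), sum(y) / len(y)
b_num = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
b_den = sum((xi - xm) ** 2 for xi in x)
b = b_num / b_den

# Mass-specific metabolic rate MR/M then scales as M**(b - 1), a
# negative exponent: small animals have higher per-gram rates.
print(round(b, 2), round(b - 1, 2))
```

Fitting in log space like this is also how the classic exponents are compared: b near 0.75 supports Kleiber-style scaling, while Rubner's surface "law" would predict b near 2/3.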
Title: Metabolic Scaling: Exploring the Relation Between Metabolic Rate and Body Size. Journal: Advances in Physiology Education.
Pub Date: 2025-01-19 | DOI: 10.1152/advan.00233.2024
Saewon Chun, Cindy Liang, Charity Thomann, Shaimaa Amin, Christina Trinh, Camila Araujo, Sherif S Hassan
Introduction: Medical schools have been incorporating active learning strategies into anatomy teaching to accommodate diverse student bodies. Formative assessment and art as a hands-on learning method have been explored as alternatives to traditional teaching methods. These methods allow students to practice and assess their understanding of anatomy as they progress. The current study investigated the effectiveness of "Art in Anatomy" lab sessions in enhancing preclerkship medical students' comprehension of challenging anatomical topics and whether differences were related to their year in medical school.
Methods: This study involved 41 pre-clinical year medical students at CUSM-SOM who participated in "Art in Anatomy" sessions. Pre-session and post-session quiz scores were collected, with score differences calculated separately for first-year and second-year medical students.
Results: Pre- and post-session quiz data were significantly skewed. Year 2 students showed a lower mean and a smaller range on pre-session quiz scores, a pattern that reversed on the post-session quizzes, which showed higher mean and median scores overall. Both Year 1 and Year 2 students improved: 68% of students gained 0, 1, 3, or 4 points, and 32% gained 2 points.
Conclusions and future directions: Art in Anatomy sessions could effectively support medical students in learning human anatomy during pre-clerkship years. The method provided formative feedback, aiding immediate recall of anatomical knowledge. Future research should explore different art forms and correlate post-session quiz scores with students' other exam scores, such as end-of-course, NBME, and practical exam scores.
Title: Art in Anatomy Session as a Method of Formative Feedback in Pre-Clerkship Medical Education. Journal: Advances in Physiology Education.
Pub Date: 2025-01-17 | DOI: 10.1152/advan.00142.2024
Ursula Holzmann, Sulekha Anand, Alexander Y Payumo
Generative large language models (LLMs) like ChatGPT can quickly produce informative essays on various topics. However, the information generated cannot be fully trusted, as artificial intelligence (AI) can make factual mistakes. This poses challenges for using such tools in college classrooms. To address this, an adaptable assignment called the ChatGPT Fact-Check was developed to teach students in college science courses the benefits of using LLMs for topic exploration while emphasizing the importance of validating their claims with evidence. The assignment requires students to use ChatGPT to generate essays, evaluate AI-generated sources, and assess the validity of AI-generated scientific claims (based on experimental evidence in primary sources). The assignment reinforces student learning around responsible AI use for exploration while maintaining evidence-based skepticism. The assignment meets objectives around efficiently leveraging beneficial features of AI, distinguishing evidence types, and evidence-based claim evaluation. Its adaptable nature allows integration across diverse courses to teach students to responsibly use AI for learning while maintaining a critical stance.
Title: The ChatGPT Fact-Check: Exploiting the Limitations of Generative AI to Develop Evidenced-Based Reasoning Skills in College Science Courses. Journal: Advances in Physiology Education.
Pub Date: 2025-01-17 | DOI: 10.1152/advan.00096.2024
Rikke Petersen, Mie Feldfoss Nørremark, Nils Færgeman
Here we describe an approach and overall concept for training undergraduate university students to understand the basic regulation and integration of glucose and fatty acid metabolism in response to fasting, carbohydrate intake, and aerobic exercise. During lectures and both theoretical and practical sessions, the students read, analyse, and discuss the fundamentals of the Randle cycle. They focus on how metabolism is regulated in adipose tissue, skeletal muscle, and liver at a molecular level under various metabolic conditions. Subsequently, students perform one of four trials while monitoring glucose and fatty acid levels: 1) overnight fast followed by ingestion of jelly sandwiches and lemonade ad libitum; 2) overnight fast followed by ingestion of a chocolate bar and a soda; 3) overnight fast followed by ingestion of carrots; and 4) light fast and aerobic exercise for 2 hours. The data from these trials clearly show that after an overnight fast, glucose levels are kept constant around 5 mM while fatty acid levels rise to 300-700 µM. Upon carbohydrate intake, glucose levels increase whereas fatty acid levels are reduced. In response to aerobic exercise, the glucose level is kept constant at 5 mM, while fatty acid levels increase over time. Collectively, the data clearly recapitulate the essence of the Randle cycle. The exercise shows the great pedagogical value of experiments within practical courses in helping students gain knowledge of energy metabolism and regulation of biochemical pathways. In an active learning environment, students successfully tackled physiological assignments, enhancing constructive communication and collaboration among peers.
Title: Randle Cycle in Practice: a student exercise to teach glucose- and fatty acid metabolism in fasted, fed and exercised states. Journal: Advances in Physiology Education.
Pub Date: 2025-01-17 | DOI: 10.1152/advan.00093.2024
Volodymyr Mavrych, Ahmed Yaqinuddin, Olena Bolgova
Despite extensive studies on large language models and their capability to respond to questions from various licensing exams, there has been limited focus on employing chatbots for specific subjects within the medical curriculum, specifically medical neuroscience. This research compared the performance of Claude 3.5 Sonnet (Anthropic), GPT-3.5 and GPT-4-1106 (OpenAI), Copilot free version (Microsoft), and Gemini 1.5 Flash (Google) against students on MCQs from the medical neuroscience course database to evaluate the chatbots' reliability. Five successive attempts by each chatbot to answer 200 USMLE-style questions were evaluated based on accuracy, relevance, and comprehensiveness. MCQs were categorized into 12 categories/topics. The results indicated that, at the current level of development, the selected AI-driven chatbots can, on average, accurately answer 67.2% of MCQs from the medical neuroscience course, which is 7.4% below the students' average. However, Claude and GPT-4 outperformed the other chatbots with 83% and 81.7% correct answers, better than the average student result. They were followed by Copilot (59.5%), GPT-3.5 (58.3%), and Gemini (53.6%). Across categories, Neurocytology, Embryology, and Diencephalon were the three strongest topics, with average results of 78.1%-86.7%, and the lowest results were in Brainstem, Special Senses, and Cerebellum, with 54.4%-57.7% correct answers. Our study suggests that Claude and GPT-4 are currently two of the most evolved chatbots, exhibiting proficiency in answering MCQs related to neuroscience that surpasses that of the average medical student. This indicates a significant milestone in how AI can supplement and enhance educational tools and techniques.
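The headline figures in a protocol like this reduce to simple arithmetic over repeated attempts: per-chatbot accuracy is total correct answers divided by (attempts × questions). A sketch of that bookkeeping; the per-attempt counts below are invented for illustration, and only the 200-question, five-attempt design and the 74.6% student average (67.2% + 7.4%, per the abstract) are taken from the source:

```python
# Hypothetical per-attempt correct counts out of 200 USMLE-style MCQs,
# mimicking the protocol of 5 successive attempts per chatbot.
# The counts themselves are illustrative, not the published results.
attempts = {
    "Claude":  [168, 165, 167, 164, 166],
    "GPT-4":   [164, 162, 165, 163, 161],
    "Copilot": [120, 118, 119, 121, 117],
}
n_questions = 200

# Accuracy (%) = total correct / (attempts * questions) * 100.
accuracy = {name: 100 * sum(c) / (len(c) * n_questions)
            for name, c in attempts.items()}

# Student average derived from the abstract: 67.2% + 7.4% = 74.6%.
student_avg = 74.6
better = [name for name, a in accuracy.items() if a > student_avg]
print(accuracy, sorted(better))
```

Averaging over repeated attempts, rather than scoring a single run, is what lets a study like this speak to reliability as well as raw accuracy.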
Title: Claude, ChatGPT, Copilot, and Gemini Performance versus Students in Different Topics of Neuroscience. Journal: Advances in Physiology Education.
Pub Date: 2025-01-17 | DOI: 10.1152/advan.00168.2024
C Jynx Pigart, Tasneem Mohammed, Theresa Acuna, Shurelia Baltazar, Connor Bean, Michayla Hart, Katelyn Huizenga, Amaris James, Hayleigh Shaw, Kimberly Zsuffa, Carly A Busch, Katelyn M Cooper
Academic stress is one of the primary factors threatening university students' well-being and performance. Undergraduate students who are working toward applying to medical school, defined as being on the pre-medicine or "premed" pathway, are suspected to have higher academic stress than their peers who are not premed. However, the factors that contribute to academic stress for premed students are not well understood. We sought to answer: Do undergraduates perceive that premed students have higher, the same, or lower stress than non-premeds? How do academic stress levels between these groups actually differ? What aspects of being premed cause academic stress? And who has left the premed track, and why? We surveyed 551 undergraduates at one large U.S. institution and answered our research questions using descriptive statistics, chi-square tests, and linear regressions. Overwhelmingly, participants perceived that premed students experience greater academic stress than their counterparts. Yet, we found no significant differences in academic stress reported among students in our sample (p > 0.05). Premed students reported their academic stress was exacerbated by not feeling competitive enough to get into medical school and by needing to maintain a high GPA. Further, students with lower GPAs were more likely to leave the premed track than those with higher GPAs (p = 0.005). Students reported leaving the premed track because another career appeared more interesting and because of the toll the premed track took on their mental health. In conclusion, our findings can inform instructors and universities on how to best support premed students.
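A chi-square test of independence, one of the methods the authors name, checks whether two categorical variables (e.g., GPA band and leaving the premed track) are associated. A minimal hand-rolled sketch on an invented 2×2 table; the counts are illustrative only, not the study's data:

```python
# Hypothetical 2x2 counts: rows = GPA band, columns = (left, stayed).
# These numbers are invented to illustrate the test, not taken from
# the study.
observed = [[30, 70],   # lower-GPA students: left, stayed
            [10, 90]]   # higher-GPA students: left, stayed

row = [sum(r) for r in observed]
col = [sum(c) for c in zip(*observed)]
n = sum(row)

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# where E = row_total * col_total / n under independence.
chi2 = sum((observed[i][j] - row[i] * col[j] / n) ** 2
           / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

# For a 2x2 table, df = 1; the 0.05 critical value is 3.841.
significant = chi2 > 3.841
print(round(chi2, 2), significant)
```

In practice one would use a library routine (e.g., SciPy's `chi2_contingency`, which also applies a continuity correction for 2×2 tables), but the statistic itself is just this sum over cells.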
Title: Premed Pressure: Examining whether premed students experience more academic stress compared to non-premeds. Journal: Advances in Physiology Education.
Pub Date : 2025-01-16 DOI: 10.1152/advan.00213.2024
Shivani Gupta, Aliya Centner, Nidhi Patel, Jonathan Kibble
While trust is an essential resource in successful social exchanges, the basis of trust in the student-professor relationship in higher education has not been extensively studied. The purpose of the present study was to gain a better understanding of how trust develops within a medical school learning environment. To that end, we applied a qualitative approach using semi-structured interviews. Interview guides were developed from the leading model of organizational trust, which posits that trustworthiness can be modeled on three factors of a trustee: their perceived ability, benevolence, and integrity. Eleven faculty members and 11 medical students in their core clerkships agreed to participate, providing in-depth viewpoints that were transcribed verbatim for thematic analysis. Faculty interviews sought to develop a model describing how trust develops in the medical school learning environment, and student interviews interrogated how faculty performed within each trust domain to corroborate best practices. The research team applied interpretative phenomenological analysis to develop consensus around the key themes. Arising from the data, we propose a model showing how faculty demonstrate their ability, benevolence, and integrity to learners, as well as features of a learning environment that promote trust, including positive student traits. Finally, we recommend a series of best practices for faculty wishing to develop a trusting learning climate.
{"title":"Investigating the nature of trust in the medical student-professor relationship: an interview study.","authors":"Shivani Gupta, Aliya Centner, Nidhi Patel, Jonathan Kibble","doi":"10.1152/advan.00213.2024","DOIUrl":"https://doi.org/10.1152/advan.00213.2024","url":null,"abstract":"<p><p>While trust is an essential resource in successful social exchanges, the basis of trust in the student-professor relationship in higher education has not been extensively studied. The purpose of the present study was to gain a better understanding of how trust develops within a medical school learning environment. To that end, we applied a qualitative approach using semi-structured interviews. Interview guides were developed from the leading model of organizational trust, which posits that trustworthiness can be modeled on three factors of a trustee: their perceived ability, benevolence, and integrity. Eleven faculty members and 11 medical students in their core clerkships agreed to participate, providing in-depth viewpoints that were transcribed verbatim for thematic analysis. Faculty interviews sought to develop a model describing how trust develops in the medical school learning environment, and student interviews interrogated how faculty performed within each trust domain to corroborate best practices. The research team applied interpretative phenomenological analysis to develop consensus around the key themes. Arising from the data, we propose a model showing how faculty demonstrate their ability, benevolence, and integrity to learners, as well as features of a learning environment that promote trust, including positive student traits. 
Finally, we recommend a series of best practices for faculty wishing to develop a trusting learning climate.</p>","PeriodicalId":50852,"journal":{"name":"Advances in Physiology Education","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-16 DOI: 10.1152/advan.00159.2024
Gregory J Crowther, Merrill D Funk, Kelly M Hennessey, Marcus M Lawrence
Learning objectives (LOs) are a pillar of course design and execution and thus a focus of curricular reforms. This study explored the extent to which the creation and use of LOs might be facilitated by three leading chatbots: ChatGPT-4o, Claude 3.5 Sonnet, and Google Gemini Advanced. We posed three main questions. Question A: When given course content, can chatbots create LOs that are consistent with five best practices in writing LOs? Question B: When given LOs written at a low level of the Revised Bloom's Taxonomy, can chatbots convert them to a higher level? Question C: When given LOs, can chatbots create assessment questions that meet six criteria of quality? We explored these questions in the context of four undergraduate courses: Applied Exercise Physiology, Human Anatomy, Human Physiology, and Motor Learning. According to instructor ratings, chatbots had a >70% success rate on most individual criteria for Questions A-C. However, chatbots' "difficulties" with a few criteria (e.g., providing appropriate context for an LO's action, assigning an appropriate Revised Bloom's Taxonomy level) meant that, overall, only 38.3% of chatbot outputs fully met all criteria and thus were possibly ready for use with students. Our findings underscore the continuing need for instructor oversight of chatbot outputs but also illustrate chatbots' potential to expedite the design and improvement of LOs and LO-related curricular materials such as Test Question Templates (TQTs), which directly align LOs with assessment questions.
{"title":"Frontier-Model Chatbots Can Help Instructors Create, Improve, and Use Learning Objectives.","authors":"Gregory J Crowther, Merrill D Funk, Kelly M Hennessey, Marcus M Lawrence","doi":"10.1152/advan.00159.2024","DOIUrl":"https://doi.org/10.1152/advan.00159.2024","url":null,"abstract":"<p><p>Learning objectives (LOs) are a pillar of course design and execution and thus a focus of curricular reforms. This study explored the extent to which the creation and use of LOs might be facilitated by three leading chatbots: ChatGPT-4o, Claude 3.5 Sonnet, and Google Gemini Advanced. We posed three main questions. Question A: When given course content, can chatbots create LOs that are consistent with five best practices in writing LOs? Question B: When given LOs written at a low level of the Revised Bloom's Taxonomy, can chatbots convert them to a higher level? Question C: When given LOs, can chatbots create assessment questions that meet six criteria of quality? We explored these questions in the context of four undergraduate courses: Applied Exercise Physiology, Human Anatomy, Human Physiology, and Motor Learning. According to instructor ratings, chatbots had a >70% success rate on most individual criteria for Questions A-C. However, chatbots' \"difficulties\" with a few criteria (e.g., providing appropriate context for an LO's action, assigning an appropriate Revised Bloom's Taxonomy level) meant that, overall, only 38.3% of chatbot outputs fully met all criteria and thus were possibly ready for use with students. 
Our findings thus underscore the continuing need for instructor oversight of chatbot outputs, but also illustrate chatbots' potential to expedite the design and improvement of LOs and LO-related curricular materials such as Test Question Templates (TQTs), which directly align LOs with assessment questions.</p>","PeriodicalId":50852,"journal":{"name":"Advances in Physiology Education","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}