Pub Date: 2025-03-01 | DOI: 10.1016/j.jsurg.2025.103477
Redesign of a Resident Evaluation Tool Using Exploratory Factor Analysis
Carly Chappell, Stephen Markowiak, Gang Ren, Laura Wharry, Stephen Stanek, Joseph Sferra
Purpose: Residents at our training program identified timeliness of faculty feedback as an area for improvement, while faculty felt the evaluation tool was time-consuming and redundant. We sought to resolve these issues through stakeholder input and modern data analysis.
Methods: Core faculty and senior residents met to revamp the "Faculty Evaluation of Resident" end-of-rotation tool. The most recent 5 years of evaluations were analyzed using Exploratory Factor Analysis, a dimension-reduction technique available in the IBM SPSS statistical package, to identify questions that were highly correlated. A new, condensed tool was then generated by combining highly correlated questions, with committee approval. Spearman's rank-order correlation was used to compare each new question against the eliminated redundant questions. Time to survey completion and frequency of written feedback were compared using t-tests. One-way ANOVA was then used to compare scores for each new question versus the eliminated questions that had been grouped into it.
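As an illustration of the pipeline this abstract describes (EFA for dimension reduction, then Spearman correlation and one-way ANOVA on the replacement questions), here is a minimal sketch in Python rather than SPSS; the file name and all column names (q1…, new_q9, old_q14, old_q15) are hypothetical placeholders, not the study's actual instrument.

```python
# Minimal sketch of the described workflow; all file and column names are assumed.
import pandas as pd
from scipy import stats
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

ratings = pd.read_csv("evaluations.csv")             # hypothetical export, one row per survey
items = [c for c in ratings.columns if c.startswith("q")]

# Exploratory Factor Analysis: inspect eigenvalues of the item correlation
# matrix and keep factors above 1 (Kaiser criterion).
fa = FactorAnalyzer(rotation=None)
fa.fit(ratings[items].dropna())
eigenvalues, _ = fa.get_eigenvalues()
n_keep = int((eigenvalues > 1).sum())

# Refit with the retained factors and a varimax rotation; the loading matrix
# shows which questions cluster together and can be combined.
fa = FactorAnalyzer(n_factors=n_keep, rotation="varimax")
fa.fit(ratings[items].dropna())
loadings = pd.DataFrame(fa.loadings_, index=items)
print(loadings.round(2))

# Spearman's rank-order correlation: one new combined question vs. one of
# the redundant questions it replaced.
rho, p = stats.spearmanr(ratings["new_q9"], ratings["old_q14"], nan_policy="omit")

# One-way ANOVA: the new question vs. the group of eliminated questions.
f_stat, p_anova = stats.f_oneway(ratings["new_q9"].dropna(),
                                 ratings["old_q14"].dropna(),
                                 ratings["old_q15"].dropna())
```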
Results: A total of 3,268 surveys were completed by 73 attendings regarding 55 resident subjects. Data were blinded by the program coordinator before analysis. Exploratory Factor Analysis indicated the initial 30-question instrument could be reduced to 12 questions while retaining 96% of the variability in performance. The component matrix indicated that 4 areas accounted for the most variability in resident performance: Overall Performance, Communication, Operative Skill, and Systems-Based Practice. Following implementation of the new evaluation form, attending surgeons completed resident evaluations at a median of 36 days after the rotation (IQR 18-59). Faculty left written feedback more frequently (53.4% vs 40.8%, p < 0.0001). For some new questions, resident performance scores differed significantly: for example, new question 9 had an average rating of 3.55 out of 5.00, while the questions it replaced averaged 3.76-3.89 (p < 0.001). For others, no statistically significant difference was found: for example, new question 12 and the questions it replaced all averaged 3.75 out of 5.00 (p = 0.985). Survey response rates also improved, from 35% to 76% at 2 months and from 89.3% to 93.5% at 6 months.
Conclusions: Faculty input and advanced statistical analysis shortened a 30-question resident evaluation tool to 12 questions while retaining 96% of the variability in resident performance. The new instrument improved the response rate and increased the number of written comments from attendings. Application of Exploratory Factor Analysis to resident evaluation represents a novel use of this tool in surgical education.
{"title":"Redesign of a Resident Evaluation Tool Using Exploratory Factor Analysis.","authors":"Carly Chappell, Stephen Markowiak, Gang Ren, Laura Wharry, Stephen Stanek, Joseph Sferra","doi":"10.1016/j.jsurg.2025.103477","DOIUrl":"https://doi.org/10.1016/j.jsurg.2025.103477","url":null,"abstract":"<p><strong>Purpose: </strong>Residents at our training program identified timeliness of faculty feedback as an area for improvement, while faculty felt the evaluation tool was time-consuming and redundant. We sought to resolve these issues through stakeholder input and modern data analysis.</p><p><strong>Methods: </strong>Core faculty and senior residents met to revamp the \"Faculty Evaluation of Resident\" end of rotation tool. The results of the most recent 5-years of evaluations were analyzed using Exploratory Factor Analysis, a dimension reduction tool provided in the IBM SPSS statistical package, to identify questions which were highly correlated. A new condensed tool was then generated by combining highly correlated questions with committee approval. Spearman's rank order correlation test was used to evaluate each new question versus the eliminated redundant questions. Time to survey completion and frequency of written feedback were compared using t-test. One-way ANOVA was then used to compare scores for each new question versus the eliminated questions that had been grouped together.</p><p><strong>Results: </strong>3,268 surveys were completed by 73 attendings regarding 55 resident subjects. Data were blinded by the program coordinator before analysis. Exploratory Factor Analysis indicated the initial 30-question instrument could be reduced to 12-questions, while retaining 96% of the variability in performance. The component matrix indicates that 4 areas accounted for the most variability in resident performance: Overall Performance, Communication, Operative Skill, and Systems Based Practice. Following implementation of the new evaluation form, attending surgeons completed resident evaluations at a median of 36 days after the rotation (IQR 18-59). Faculty left written feedback more frequently (53.4% vs 40.8%, p < 0.0001). For some new questions, the resident performances were statistically different. For example, new question 9 had an average rating of 3.55 out of 5.00, while the questions it replaced averaged 3.76-3.89 (p < 0.001). For other questions, no statistically significant difference was found. For example, new question 12 and the questions it replaced all averaged 3.75 out of 5.00 (p = 0.985). Survey response rates also improved from 35% to 76% at 2 months and 89.3% to 93.5% at 6 months.</p><p><strong>Conclusions: </strong>Faculty input and advanced statistical analysis shortened a 30-question resident evaluation tool to 12-questions while retaining 96% of the variability in resident performance. The new instrument resulted in improved response rate and increased number of written comments from attendings. 
Application of Exploratory Factor Analysis to resident education represents novel use of this tool in surgical education.</p>","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":" ","pages":"103477"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143538271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-15 | DOI: 10.1016/j.jsurg.2025.103463
To Blind or Not to Blind: Evaluating the Impact of Withholding Scores and Grades From Interviewers in General Surgery Resident Recruitment
Nicole E Brooks, Judith C French, Jeremy M Lipman, Ajita S Prabhu
Objective: To compare scoring outcomes between interviewers blinded to scores/grades/MSPE and those given the full applicant file, in order to evaluate the effect of blinding on interview scores and to confirm that applicants can be confidently evaluated when blinding is used.
Design, setting and participants: Nineteen interviewers were purposively randomized to receive either the complete application or a file with all information except grades/MSPE/USMLE score(s) for 90 applicants prior to 218 interviews during the 2022 to 2023 general surgery recruitment cycle. Blinding was randomly assigned while ensuring both blinded and nonblinded interviews for each interviewer and each applicant. Two program leaders involved in study implementation were excluded from blinding. All other aspects of the selection process remained unchanged from historic methods. Each applicant had 3 to 4 interviews, each scored on a 10-point scale prior to discussion with other faculty. Descriptive and univariate statistics were used to analyze scoring patterns. Qualitative data regarding the experiences of blinded interviewers were analyzed to generate themes.
Results: There were no differences between blinding groups in interview scores or in the deviation from applicants' mean scores. This remained true for within-applicant analyses and, for all but 1 interviewer (95%), for within-interviewer analyses. Between-interviewer score differences were seen for interview scores across all interviewers and when comparing nonblinded vs. nonblinded scores across interviewers, but not when comparing blinded vs. blinded scores across interviewers. Qualitative data support the ability to confidently evaluate interview performance when blinded, a frequent practice of "self-blinding" to limit bias even when given scores/grades/MSPE, and a belief that scores/grades/MSPE are relevant for screening but that the interview has separate priorities.
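The within-applicant comparison reported here can be pictured with a small sketch on a long-format score table (one row per interview). The abstract specifies only "descriptive and univariate statistics", so the Welch t-test below, the file name, and the column names are all assumptions for illustration.

```python
# Minimal sketch of a blinded-vs-nonblinded comparison; names are hypothetical.
import pandas as pd
from scipy import stats

iv = pd.read_csv("interview_scores.csv")   # columns: applicant_id, interviewer_id, blinded, score

# Raw 10-point interview scores, by blinding arm.
b = iv.loc[iv["blinded"] == 1, "score"]
nb = iv.loc[iv["blinded"] == 0, "score"]
t, p = stats.ttest_ind(b, nb, equal_var=False)

# Within-applicant view: each score's deviation from that applicant's mean,
# which removes applicant-level differences before comparing the arms.
iv["delta"] = iv["score"] - iv.groupby("applicant_id")["score"].transform("mean")
t_d, p_d = stats.ttest_ind(iv.loc[iv["blinded"] == 1, "delta"],
                           iv.loc[iv["blinded"] == 0, "delta"],
                           equal_var=False)
print(f"overall: p={p:.3f}; within-applicant: p={p_d:.3f}")
```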
Conclusions: Blinding of interviewers to scores/grades/MSPE did not significantly change interview scoring outcomes. Interviewer experiences support the ability to confidently evaluate interview performance when blinded. Given that negative effects of blinding were not found and prior work supports that bias may be mitigated by blinded interviews, we support its use in residency recruitment.
{"title":"To Blind or Not to Blind: Evaluating the Impact of Withholding Scores and Grades From Interviewers in General Surgery Resident Recruitment.","authors":"Nicole E Brooks, Judith C French, Jeremy M Lipman, Ajita S Prabhu","doi":"10.1016/j.jsurg.2025.103463","DOIUrl":"https://doi.org/10.1016/j.jsurg.2025.103463","url":null,"abstract":"<p><strong>Objective: </strong>Compare scoring outcomes between interviewers blinded to scores/grades/MSPE and those with the full applicant file to evaluate the effect of blinding on interview scores and ensure applicants can be confidently evaluated when blinding is used.</p><p><strong>Design, setting and participants: </strong>Nineteen interviewers were purposively randomized to receive a complete application or file with all information except applicant grades/MSPE/USMLE score(s) of 90 applicants prior to 218 interviews during 2022 to 2023 general surgery recruitment. Blinding was randomly assigned while ensuring blinded and nonblinded interviews for both interviewers and applicants. Two program leaders involved in study implementation were excluded from blinding. All other aspects of the selection process remained unchanged from historic methods. Each applicant had 3 to 4 interviews. Each interview was scored prior to discussion with other faculty using a 10-point scale. Descriptive and univariate statistics analyzed scoring patterns. Qualitative data regarding the experiences of blinded interviewers was analyzed to generate themes.</p><p><strong>Results: </strong>There were no differences in interview scores or difference from the applicants' mean scores between blinding groups. This remained true for within-applicant analyses and for all but 1 interviewer (95%) for within-interviewer analyses. Between-interviewer score differences were seen for interview scores across all interviewers and when comparing nonblinded vs. nonblinded scores across interviewers, but not when comparing blinded vs. blinded scores across interviewers. Qualitative data support the ability to confidently evaluate interview performance when blinded, frequent practice of \"self-blinding\" to limit bias even when given scores/grades/MSPE, and belief that scores/grades/MSPE are relevant for screening, but the interview has separate priorities.</p><p><strong>Conclusions: </strong>Blinding of interviewers to scores/grades/MSPE did not significantly change interview scoring outcomes. Interviewer experiences support the ability to confidently evaluate interview performance when blinded. Given that negative effects of blinding were not found and prior work supports that bias may be mitigated by blinded interviews, we support its use in residency recruitment.</p>","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":" ","pages":"103463"},"PeriodicalIF":0.0,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143434757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-01 | Epub Date: 2024-05-16 | DOI: 10.1016/j.jsurg.2024.04.007
Perspectives on Application and Interview Capping in Residency Selection of Surgical Subspecialties
Shwetha Mudalegundi, Marisa Clifton, Scott Lifchez, Dawn LaPorte, Saras Ramanathan, Ahmed H Sabit, Fasika Woreta
Objective: With the advent of virtual interviews, the potential for interview hoarding by applicants became a greater concern because the financial constraints associated with in-person interviewing no longer apply. Simultaneously, the average number of applications submitted each year is rising. Currently there is no cap on the number of applications or interviews an applicant may complete when applying to residency, with the exception of ophthalmology, which caps interviews at 15. No studies have assessed applicants' perspectives on an application or interview cap. We assessed the attitudes of surgical subspecialty applicants toward capping, which may be useful when considering innovations in residency selection.
Design/setting/participants: A total of 1841 applicants to the Johns Hopkins ophthalmology, urology, plastic surgery, and orthopedic surgery residency programs from the 2022-2023 cycle were invited to respond to a 22-item questionnaire. Statistical analyses of aggregate data were conducted using R.
Results: Of the 1841 invited applicants, 776 (42%) responded; 288 (40%) supported an application cap, while 455 (63%) supported an interview cap. Specialty (p < 0.001), gender (p < 0.001), taking a gap year (p = 0.02), medical school region (p = 0.04), and number of interviews accepted off a waitlist (p = 0.01) were all significantly associated with opinion regarding an application cap. Specialty (p < 0.001), USMLE Step 1 score (p = 0.004), number of interviews (p < 0.001), and number of programs ranked (p < 0.001) were all significantly associated with opinion regarding an interview cap. Applicants who supported the respective caps believed that, on average, a cap should consist of 48.1 (16.1) applications and 16.0 (8.0) interviews.
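The associations reported here pair categorical and continuous predictors with cap support. The abstract states the analyses were run in R; the sketch below uses Python only for consistency with the other examples in this listing, and every file and column name in it is a hypothetical placeholder.

```python
# Minimal sketch of the kinds of association tests reported; names are assumed.
import pandas as pd
from scipy import stats

resp = pd.read_csv("capping_survey.csv")   # hypothetical export, one row per respondent

# Categorical predictor (e.g., specialty) vs. support for an application cap:
# chi-square test on the contingency table.
table = pd.crosstab(resp["specialty"], resp["supports_application_cap"])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Continuous predictor (e.g., USMLE Step 1 score) vs. support for an
# interview cap: compare supporters and non-supporters directly.
yes = resp.loc[resp["supports_interview_cap"] == 1, "step1_score"].dropna()
no = resp.loc[resp["supports_interview_cap"] == 0, "step1_score"].dropna()
t, p_t = stats.ttest_ind(yes, no, equal_var=False)
print(f"specialty vs. application cap: p={p_chi:.3f}; Step 1 vs. interview cap: p={p_t:.3f}")
```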
Conclusions: Our findings highlight a desire for interview caps among the majority of applicants to surgical subspecialties; this innovation may therefore be considered by other specialties in the era of virtual interviews.
{"title":"Perspectives on Application and Interview Capping in Residency Selection of Surgical Subspecialties.","authors":"Shwetha Mudalegundi, Marisa Clifton, Scott Lifchez, Dawn LaPorte, Saras Ramanathan, Ahmed H Sabit, Fasika Woreta","doi":"10.1016/j.jsurg.2024.04.007","DOIUrl":"10.1016/j.jsurg.2024.04.007","url":null,"abstract":"<p><strong>Objective: </strong>With the advent of virtual interviews, the potential for interview hoarding by applicants became of greater concern due to lack of financial constraints associated with in-person interviewing. Simultaneously, the average number of applications submitted each year is rising. Currently there is no cap to the number of applications or interviews an applicant may complete when applying to residency, with the exception of ophthalmology with a cap of 15 interviews. No studies have assessed the applicants' perspectives on an application or interview cap. We assessed the attitudes of surgical subspecialty applicants towards capping, which may be useful when considering innovations in residency selection.</p><p><strong>Design/setting/participants: </strong>About 1841 applicants to the Johns Hopkins' ophthalmology, urology, plastic surgery, and orthopedic surgery residency programs from the 2022-2023 cycle were invited to respond to a 22-item questionnaire. Statistical analyses of aggregate data were conducted using R.</p><p><strong>Results: </strong>Of the 776/1841 (42%) responses, 288 (40%) were in support of an application cap, while 455 (63%) were in support of an interview cap. Specialty (p < 0.001), gender (p < 0.001), taking a gap year (p = 0.02), medical school region (p = 0.04), and number of interviews accepted off of a waitlist (p = 0.01) were all significantly associated with a difference in opinion regarding an application cap. Specialty (p < 0.001), USMLE Step 1 score (p = 0.004), number of interviews (p < 0.001), and number of programs ranked (p < 0.001) were all significantly associated with a difference in opinion regarding an interview cap. Of those applicants who were in support of the respective caps they believed that on average a cap should consist of 48.1 (16.1) applications and 16.0 (8.0) interviews.</p><p><strong>Conclusions: </strong>Our findings highlight the desire for interview caps among the majority of applicants to surgical subspecialties and thus this innovation may be considered by other specialties in the era of virtual interviews.</p>","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":" ","pages":"1013-1023"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140961225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | DOI: 10.1016/j.jsurg.2023.11.017
Development of a Competency Framework Defining Effective Surgical Educators
Neha Sharma, Emily Steinhagen, Jeffrey M Marks, J. Ammori
{"title":"Development of a Competency Framework Defining Effective Surgical Educators.","authors":"Neha Sharma, Emily Steinhagen, Jeffrey M Marks, J. Ammori","doi":"10.1016/j.jsurg.2023.11.017","DOIUrl":"https://doi.org/10.1016/j.jsurg.2023.11.017","url":null,"abstract":"","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":"443 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139020166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | DOI: 10.1016/j.jsurg.2023.11.007
Exploring Medical Students' Perceptions of Peer-to-Peer Interactions Related to Applying to a Surgical Residency
Michael Ho-Yan Lee, Yajur Iyengar, Dan Budiansky, P. Veinot, M. Law
{"title":"Exploring Medical Students' Perceptions of Peer-to-Peer Interactions Related to Applying to a Surgical Residency.","authors":"Michael Ho-Yan Lee, Yajur Iyengar, Dan Budiansky, P. Veinot, M. Law","doi":"10.1016/j.jsurg.2023.11.007","DOIUrl":"https://doi.org/10.1016/j.jsurg.2023.11.007","url":null,"abstract":"","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":"333 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139014520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.2139/ssrn.4170638
Self-Assessment Versus Peer-Assessment in Microsurgery Learning: A Comparative Retrospective Study in a Surgery Residents Cohort
Eva Deveze, A. Traoré, Nicolas Ribault, D. Estoppey, Benoît Latelise, H. Fournier, N. Bigorre
Introduction: In surgical learning, self-assessment allows physicians to identify and improve their strong and weak points. However, its scientific validity has yet to be demonstrated. The aim of this study was to analyze whether there is a link between self-assessment accuracy and improvement in surgical skills. We hypothesized that accurate self-assessment allows greater improvement.
Material and method: We set up a retrospective cohort study at the tertiary University Hospital of Angers. Between 2019 and 2021, twenty-eight surgery residents took part in a microsurgery program and were included in the study. For two weeks, they performed anastomosis training on inert material and on living anesthetized rats under a microscope. Each resident was evaluated during the workshop by senior surgeons on 10 items: movement stability and fluidity, instrument manipulation, needles, dissection, clamp setting, vessel manipulation, suture, checking before clamp removal, checking after clamp removal, and watertightness. Self-assessment was performed by the residents with the same grid at the end of the workshop. Residents' and seniors' evaluations were double-blinded. We retrospectively analyzed the concordance between seniors' objective assessment and self-assessment, and the effect of accurate self-assessment on technical improvement.
Results: Data for twenty-five residents were analyzed; 14 were female (56%). The mean age was 29 years. Surgical specialties were orthopedics (44%), maxillofacial surgery (45.4%), neurosurgery (12%), gynecology (4%), and vascular surgery (4%). According to Cohen's kappa coefficient, 14 residents (56%) underestimated themselves, 7 (28%) were concordant with peer assessment, and 4 (16%) overestimated themselves. The concordance between self- and peer assessment during sessions was positive for the most objective items and negative for the most subjective items. Technical skills improvement, in terms of peer-assessment averages, was positive for each item in each group, without statistical differences between groups.
Conclusion: We found that the ability to self-assess in a fast-track microsurgery module for surgery residents varied according to the gestures analyzed. We demonstrated improvement in terms of self-assessment for objective items and a decrease for subjective items. However, we did not find any relation between the improvement curve and the accuracy of self-assessment.
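The concordance measure named in these results, Cohen's kappa, compares two raters' scores on the same items. A minimal sketch follows; the rating arrays are invented illustrative values on the 10-item grid, not study data, and the choice of quadratic weighting is an assumption.

```python
# Minimal sketch of self- vs. peer-assessment concordance via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

self_scores = [3, 4, 2, 5, 3, 4, 4, 2, 3, 4]   # invented resident self-ratings per item
peer_scores = [4, 4, 3, 5, 4, 4, 5, 3, 3, 4]   # invented senior ratings of the same items

# Quadratic weighting suits ordinal scales: small disagreements are penalized
# less than large ones. A resident whose self-ratings sit systematically below
# the peer ratings (as here) would be classed as underestimating themselves.
kappa = cohen_kappa_score(self_scores, peer_scores, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```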
{"title":"Self-Assessment Versus Peer-Assessment in Microsurgery Learning: A Comparative Retrospective Study in a Surgery Residents Cohort.","authors":"Eva Deveze, A. Traoré, Nicolas Ribault, D. Estoppey, Benoît Latelise, H. Fournier, N. Bigorre","doi":"10.2139/ssrn.4170638","DOIUrl":"https://doi.org/10.2139/ssrn.4170638","url":null,"abstract":"INTRODUCTION\u0000In surgical learning, self-assessment allows the physician to identify and improve his strong and weak points. However, its scientific validity has yet to be demonstrated. The aim of this study was to analyze if there is a link between self-assessment accuracy and improvement in surgical skills. We make the hypothesis that an accurate self-assessment allows a greater improvement MATERIAL AND METHOD: We set up a retrospective cohort study at the tertiary University Hospital of Angers. Between 2019 and 2021, twenty-eight surgery residents took part into a microsurgery program and were included in the study. For two weeks, they performed anastomosis training on inert material and living anesthetized rats under microscope. Each resident was evaluated during the workshop by senior surgeons on 10 items: movement stability and fluidity, instrument manipulation, needles, dissection, clamp setting, vessel manipulation, suture, checking before clamp removal, checking after clamp removal, watertighness. Self-assessment was performed by the residents with the same grid, at the end of the workshop. Residents' and senior's evaluations were double-blind. We retrospectively analyzed the concordance between senior objective assessment and self-assessment, and the effect of an accurate self-assessment on technical improvement.\u0000\u0000\u0000RESULTS\u0000Data for twenty-five residents were analyzed, 14 were female (56%). The mean age was 29 years. Surgical specialties were orthopedics (44%), maxillofacial surgery (45.4%), neurosurgery (12%), gynecology (4%) and vascular surgery (4%). According to Cohen's kappa coefficient, 14 residents (56%) underestimated themselves, 7 (28%) were concordant with peer-assessment and 4 (16%) overestimated themselves. The concordance between self and peer assessment during sessions was positive for the most objective items, and negative for the most subjective items. Technical skills improvement in term of peer-assessment averages was positive for each item in each group, without statistical differences between groups.\u0000\u0000\u0000CONCLUSION\u0000We found that the ability to self-assess in a fast-track microsurgery module for surgery residents varied according to analyzed gestures. We demonstrated an improvement in term of self-assessment for objective items, and a decrease for subjective items. However, we didn't find any relation between improvement curve and the accuracy of self-assessment.","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45978712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-11-01 | DOI: 10.1016/j.jamcollsurg.2021.07.472
New Heuristics to Stratify Applicants: Predictors of General Surgery Residency Applicant Step 1 Scores
Sarah Lund, Jonathan D. D’Angelo, Anne-Lise D. D’Angelo, Stephanie F. Heller, John Stulak, Mariela Rivera
{"title":"New Heuristics to Stratify Applicants: Predictors of General Surgery Residency Applicant Step 1 Scores.","authors":"Sarah Lund, Jonathan D. D’Angelo, Anne-Lise D. D’Angelo, Stephanie F. Heller, John Stulak, Mariela Rivera","doi":"10.1016/j.jamcollsurg.2021.07.472","DOIUrl":"https://doi.org/10.1016/j.jamcollsurg.2021.07.472","url":null,"abstract":"","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47196197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-01 | DOI: 10.1016/J.JAMCOLLSURG.2020.07.392
Rigorous Curricular Innovation: Development, Integration, and Evaluation of Anatomic Clinical Correlations Module
Julia D. Nedimyer, Atsusi Hirumi, J. Cendan
{"title":"Rigorous Curricular Innovation: Development, Integration, and Evaluation of Anatomic Clinical Correlations Module.","authors":"Julia D. Nedimyer, Atsusi Hirumi, J. Cendan","doi":"10.1016/J.JAMCOLLSURG.2020.07.392","DOIUrl":"https://doi.org/10.1016/J.JAMCOLLSURG.2020.07.392","url":null,"abstract":"","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/J.JAMCOLLSURG.2020.07.392","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41426779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1016/j.jamcollsurg.2019.08.555
Toward Autonomy and Conditional Independence: A Standardized Script Improves Patient Acceptance of Surgical Trainee Roles
A. F. Bryan, Darren S. Bryan, J. Matthews, K. Roggin
{"title":"Toward Autonomy and Conditional Independence: A Standardized Script Improves Patient Acceptance of Surgical Trainee Roles.","authors":"A. F. Bryan, Darren S. Bryan, J. Matthews, K. Roggin","doi":"10.1016/j.jamcollsurg.2019.08.555","DOIUrl":"https://doi.org/10.1016/j.jamcollsurg.2019.08.555","url":null,"abstract":"","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jamcollsurg.2019.08.555","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49624607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-01 | DOI: 10.1016/J.JAMCOLLSURG.2018.07.493
Patient Perception of Medical Student Professionalism: Does Attire Matter?
Aabra Ahmed, Sharjeel Israr, K. Chapple, J. Weinberg, P. Goslar, Joel Hayden, R. Gagliano, Thomas L. Gillespie
{"title":"Patient Perception of Medical Student Professionalism: Does Attire Matter?","authors":"Aabra Ahmed, Sharjeel Israr, K. Chapple, J. Weinberg, P. Goslar, Joel Hayden, R. Gagliano, Thomas L. Gillespie","doi":"10.1016/J.JAMCOLLSURG.2018.07.493","DOIUrl":"https://doi.org/10.1016/J.JAMCOLLSURG.2018.07.493","url":null,"abstract":"","PeriodicalId":94109,"journal":{"name":"Journal of surgical education","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/J.JAMCOLLSURG.2018.07.493","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42403995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}