Pub Date: 2025-10-01 | Epub Date: 2025-07-04 | DOI: 10.1177/0272989X251346213
Marina Motsenok, Tehila Kogut
Background: Research suggests that the method used to determine voluntary consent (i.e., opt-in versus opt-out policies) greatly affects the number of registered organ donors in various countries. Although the concept of organ transplantation is broadly supported, the relatively low percentage of registered donors in opt-in countries is puzzling. We suggest that deviating from the status quo (such as signing an organ donor card in opt-in countries or removing oneself from the list of registered donors in opt-out countries) heightens one's sense of vulnerability.
Design: We examined our prediction in 2 online experiments involving participants from the United States (studies 1 and 2), which has an opt-in organ-donation policy, and from the United Kingdom (study 2), a country that recently changed its policy to opt out.
Results: In study 1, registered organ donors perceived their vulnerability as greater after being reminded of their decision, whereas vulnerability perceptions among nondonors who upheld the status quo were unaffected by such a reminder. In study 2, imagining oneself making an organ donation decision that deviates from the status quo (signing a commitment under an opt-in policy or removing oneself from the registered donors list under an opt-out policy) increased participants' perceived personal vulnerability.
Conclusions: The decision to become an organ donor may affect individuals' sense of physical vulnerability, depending on their country's donation policy. Deviating from the status quo may thus curtail willingness to donate organs. Understanding the psychological barriers to organ donation may help overcome them by presenting the issue in a manner that takes such perceptions into account. We recommend that future research explore whether this heightened sense of vulnerability deters organ donation in opt-in countries.
Highlights:
- The decision to become an organ donor may affect individuals' sense of physical vulnerability, depending on their country's donation policy (opt in versus opt out).
- Registered organ donors perceived their vulnerability as greater after being reminded of their decision, but vulnerability perceptions were not affected by such a reminder among nondonors who upheld the status quo.
- Imagining oneself making an organ donation decision that deviates from the status quo (signing a commitment under an opt-in policy or removing oneself from the registered donors list under an opt-out policy) increased participants' perceived personal vulnerability.
- Future research is needed to examine whether this heightened sense of vulnerability affects actual organ donation decisions.
Title: Organ Donation Decisions: When Deviating from the Status Quo Heightens Perceived Vulnerability.
Medical Decision Making, pp. 862-872. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413501/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-07-14 | DOI: 10.1177/0272989X251346894
Odilon Quentin Assan, Claude Bernard Uwizeye, Hervé Tchala Vignon Zomahoun, Oscar Nduwimana, Wilhelm Dubuisson, Guillaume Sillon, Danielle Bergeron, Stéphane Groulx, Wilber Deck, Anik Giguère, France Légaré
Decision aids (DAs) are more likely to be adopted if they are co-developed with stakeholders and culturally adapted. Using the DEVELOPTOOLS Reporting Checklist, we describe a process for rapid co-development of a culturally adapted DA prototype for population-wide cancer-screening programs. Our systematic, collaborative, and iterative methodology had 7 phases: 1) set up the process by adopting best governance practices (e.g., identify and engage stakeholders, adapt our collaborative DA design process, validate the development process), with governance comprising 20 individuals from a wide range of sectors, including at least 2 citizens; 2) identify and analyze existing DAs relevant to the cancer screening of interest by conducting a systematic review; 3) share results with stakeholders and make recommendations; 4) formulate Quebec-specific DA content and consult stakeholders, including users, by conducting e-Delphi surveys; 5) co-design a prototype with stakeholders, including users, following international DA standards; 6) translate the DA using translation-back-translation approaches and deploy it; and 7) mobilize knowledge using end-of-grant and integrated knowledge mobilization (KMb) activities. On the User-Centred Design 11-Item Measure (UCD-11), our proposed process scored 10 of 11. Overall, we expect this new co-developed process to ensure that good-quality, user-centered, and culturally adapted DAs for cancer screening are produced within reasonable time frames. We also expect it to foster adoption of the DAs.
Highlights:
- We report on a 7-step process for collaborating with various stakeholders to create a culturally adapted decision aid (DA) prototype for deciding about cancer screening in Quebec, Canada.
- The process includes:
  ○ making sure the DA prototype design includes users and other interested parties and reflects their needs, perceptions, values, and preferences;
  ○ finding and analyzing existing DAs on cancer screening to decide what ours should include;
  ○ respecting international standards and criteria for DA design;
  ○ repeated rounds of expert consensus about the exact content, with revisions between each round.
- This method could support the rapid creation of DAs shaped by users' interests and will ultimately encourage shared decision making.
Title: Process for Rapid Co-development of a Decision Aid Prototype for Population-wide Cancer Screening.
Medical Decision Making, pp. 775-793. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413505/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-07-10 | DOI: 10.1177/0272989X251346203
Patricia Kenny, Deborah J Street, Jane Hall
Introduction: Societal preferences over different health states are used to guide service planning, but there has been little investigation of treatment preferences at the end of life. This study aimed to examine population preferences for active treatment or palliation for cancer patients when life expectancy is limited, and the relative importance of time spent in hospital or with functional limitation.
Methods: We used a discrete choice experiment that presented respondents with a series of hypothetical patients who had died, describing their last few months of life. Respondents selected the end-of-life alternative they thought best. Data were collected from 1,502 Australian adults participating in an online survey panel. Latent class analysis was used to identify groups with different preference patterns.
Results: Four preference groups were identified, along with an additional group that we termed inattentive, as they appeared to respond at random. Among the 1,070 respondents assigned to 1 of the 4 preference groups, 33.5% favored longer overall survival regardless of how that time was spent; 26.1% were willing to accept a shorter survival time in exchange for less time in the hospital or completely incapacitated at home, and they had a stronger preference for palliative care in older patients; 22.5% strongly supported the use of palliative care regardless of the age of the patients, preferring less time in the hospital or time at home with any functional limitations; and 17.9% had a strong preference not to use palliative care.
Conclusions: Our results show distinct heterogeneity in population preferences for end-of-life care. Policy goals and service planning should acknowledge this heterogeneity and provide end-of-life support services with the flexibility to enhance patient choice. Many current funding approaches are not consistent with the philosophy of patient-centered care. Policy makers can and should explore innovative approaches to improve efficiency and equity.
Highlights:
- Social preferences, based on a general population survey, vary across palliative and active care approaches.
- Preferences for palliative care and willingness to tolerate time in hospital and time at home with activity limitations varied within the groups willing to trade quality and quantity of life.
- Policy, resource allocation, and funding methods should accommodate this variability.
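As a quick consistency check on the latent-class results above, the four reported shares account for the entire classified subsample. A minimal Python sketch (group labels are paraphrases of the abstract; the counts are implied by the published rounded percentages, not taken from the study data):

```python
# Reported latent-class shares, in per-mille (33.5% -> 335), among the
# 1,070 respondents assigned to one of the 4 preference groups.
shares_per_mille = {
    "longer survival regardless of how time is spent": 335,
    "trade survival for less hospital/incapacitated time": 261,
    "palliative care regardless of patient age": 225,
    "avoid palliative care": 179,
}
n_classified = 1070

# Group sizes implied by the rounded percentages in the abstract.
counts = {g: round(s * n_classified / 1000) for g, s in shares_per_mille.items()}

assert sum(shares_per_mille.values()) == 1000  # shares cover the whole subsample
assert sum(counts.values()) == n_classified    # implied counts sum back to 1,070
print(counts)
```

The shares sum to exactly 100%, so the rounded group counts partition the 1,070 classified respondents with no remainder.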
Title: Population Preferences for Treatment in Life-Limiting Illness: Valuing the Way Time Is Spent at the End of Life.
Medical Decision Making, pp. 849-861. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413504/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-06-24 | DOI: 10.1177/0272989X251346844
Tran T Doan, Bradley E Iott
Introduction: Hospitals are interested in improving the quality of data disaggregation and collection to advance diversity, equity, and inclusion (DEI) goals. We evaluated the extent to which hospitals are adopting DEI disaggregated data to inform organizational decisions and the characteristics associated with this adoption.
Methods: We analyzed data from the 2022 American Hospital Association Annual Survey, which included the final iteration of a new survey item about hospital adoption of DEI disaggregated data for decision making. Descriptive statistics, logistic regression, and negative binomial regression were used to evaluate this survey item.
Results: Among hospitals adopting DEI disaggregated data (n = 2,596, 41.9%), two-thirds used these data to inform decisions about patient outcomes, half about training or professional development, and one-third about supply chain or procurement. Larger, tax-exempt, Veterans Affairs, or metropolitan hospitals were significantly more likely to adopt DEI disaggregated data for decision making.
Limitations: Our work is limited by the reporting of 1-y cross-sectional results.
Conclusions: Most hospitals that adopt DEI disaggregated data use them to inform decisions about patient outcomes. Future research should explore whether hospital decisions or disaggregated data adoption have advanced DEI and health equity for underserved communities.
Implications: Analysis of disaggregated data adoption could reveal how hospitals make decisions and allocate funding to advance DEI goals and health equity.
Highlights:
- There is limited understanding of the extent to which hospitals adopt diversity, equity, and inclusion (DEI) disaggregated data to inform organizational decision making, highlighting a knowledge gap at the intersection of data equity and health care management.
- Among hospitals that adopt DEI disaggregated data, two-thirds use them to inform organizational decisions about patient outcomes, and half about professional development.
- Larger, tax-exempt, Veterans Affairs, or metropolitan hospitals are more likely to adopt DEI disaggregated data for organizational decision making.
- Future research is needed to explore whether hospital adoption of DEI disaggregated data has advanced DEI organizational goals and health equity for underserved populations.
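The study models adoption with logistic regression on hospital characteristics. The mechanics of that approach can be sketched as follows; the predictor names and coefficients here are hypothetical, chosen only to illustrate how size, tax status, and location would enter such a model, and are not the study's fitted estimates:

```python
import math

# HYPOTHETICAL coefficients for an illustrative logistic model of
# DEI-data adoption; signs mirror the direction of the reported
# associations (larger, tax-exempt, metropolitan -> more likely).
COEFS = {
    "intercept": -1.0,
    "log_beds": 0.30,
    "tax_exempt": 0.50,
    "metropolitan": 0.40,
}

def adoption_probability(beds: int, tax_exempt: bool, metropolitan: bool) -> float:
    """Predicted probability of adopting DEI disaggregated data."""
    logit = (
        COEFS["intercept"]
        + COEFS["log_beds"] * math.log(beds)
        + COEFS["tax_exempt"] * tax_exempt
        + COEFS["metropolitan"] * metropolitan
    )
    return 1.0 / (1.0 + math.exp(-logit))

small_rural = adoption_probability(beds=25, tax_exempt=False, metropolitan=False)
large_metro = adoption_probability(beds=500, tax_exempt=True, metropolitan=True)
print(f"small rural: {small_rural:.2f}, large metro: {large_metro:.2f}")
```

With any positive coefficients of this form, the model orders a large tax-exempt metropolitan hospital above a small rural one, which is the qualitative pattern the abstract reports.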
Title: Hospital Adoption of Diversity, Equity, and Inclusion (DEI) Disaggregated Data for Organizational Decision Making.
Medical Decision Making, pp. 917-922.
Pub Date: 2025-10-01 | Epub Date: 2025-06-24 | DOI: 10.1177/0272989X251346799
Natalie C Benda, Brian J Zikmund-Fisher, Jessica S Ancker
Background: Research with lay audiences (e.g., patients, the public) can inform the communication of health-related numerical information. However, a recent systematic review (Making Numbers Meaningful) highlighted several common issues in the literature that impair readers' ability to evaluate and replicate these studies.
Purpose: To create a set of guidelines for reporting research on communicating numbers to lay audiences for health-related purposes.
Reporting Recommendations: We present 6 common reporting issues from research on communicating numbers, which pertain to the background motivating the study, experimental design and analysis reporting, description of the outcomes, and reporting of the data presentation formats. To address these issues, we propose 7 reporting guidelines: 1) specifying how study objectives address a gap in evidence on communicating numbers, 2) clearly reporting all combinations of data presentation formats (experimental conditions) compared, 3) providing verbatim examples of the data presented to the audience, 4) describing whether or not participants had access to the data presentation formats while outcomes were assessed, 5) reporting the wording of all outcome measures, 6) using standardized terms for both outcomes and data presentation formats, and 7) ensuring that broad outcome concepts such as gist, comprehension, or knowledge are concretely defined.
Conclusions: Future studies on communicating health-related numbers should use these guidelines to improve the quality of reporting and the ease of evidence synthesis.
Highlights:
- Our systematic review allowed us to exhaustively identify and enumerate several common reporting issues from research on communicating numbers that make it challenging to synthesize evidence.
- Reporting issues involved not including the background motivating the gap the study addresses, insufficiently describing experimental designs and analyses, and failing to report information regarding the outcomes measured.
- We propose 7 reporting guidelines for future research on communicating numbers to address the issues detected:
  1. specifying how study objectives address a gap in evidence on communicating numbers;
  2. clearly reporting all combinations of data presentation format elements compared;
  3. providing verbatim examples of the data presentation formats;
  4. describing whether participants had access to the data presentation formats while outcomes were assessed;
  5. reporting the wording of all outcome measures;
  6. using standardized terms for both outcomes and data presentation formats;
  7. ensuring that broad outcome concepts such as gist, comprehension, or knowledge are concretely defined.
- Implementation of these guidelines will facilitate knowledge synthesis of research on communicating numbers and support the creation of evidence-based best practices for communicating health-related numbers to lay audiences.
Title: How to Report Research on the Communication of Health-Related Numbers: The Research on Communicating Numbers (ReCoN) Guidelines.
Medical Decision Making, pp. 826-833. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12353949/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-07-07 | DOI: 10.1177/0272989X251346811
Brian J Zikmund-Fisher, Natalie C Benda, Jessica S Ancker
PurposeTo summarize the degree to which evidence from our recent Making Numbers Meaningful (MNM) systematic review of the effects of data presentation format on communication of health numbers supports recommendations from the 2021 International Patient Decision Aids Standards (IPDAS) Collaboration papers on presenting probabilities.MethodsThe MNM review generated 1,119 distinct findings (derived from 316 papers) related to communication of probabilities to patients or other lay audiences, classifying each finding by its relation to audience task, type of stimulus (data and data presentation format), and up to 10 distinct sets of outcomes: identification and/or recall, contrast, categorization, computation, probability perceptions and/or feelings, effectiveness perceptions and/or feelings, behavioral intentions or behavior, trust, preference, and discrimination. Here, we summarize the findings related to each of the 35 IPDAS paper recommendations.ResultsStrong evidence exists to support several IPDAS recommendations, including those related to the use of part-to-whole graphical formats (e.g., icon arrays) and avoidance of verbal probability terms, 1-in-X formats, and relative risk formats to prevent amplification of probability perceptions, effectiveness perceptions, and/or behavioral intentions as well as the use of consistent denominators to improve computation outcomes. However, the evidence base appears weaker and less complete for other IPDAS recommendations (e.g., recommendations regarding numerical estimates in context and evaluative labels). The IPDAS papers and the MNM review agree that both communication of uncertainty and use of interactive formats need further research.ConclusionsThe idea that no one visual or numerical format is optimal for every probability communication situation is both an IPDAS panel recommendation and foundational to the MNM project's design. 
Although no MNM evidence contradicts IPDAS recommendations, the evidence base needed to support many common probability communication recommendations remains incomplete.HighlightsThe Making Numbers Meaningful (MNM) systematic review of the literature on communicating health numbers provides mixed support for the recommendations of the 2021 International Patient Decision Aids Standards (IPDAS) evidence papers on presenting probabilities in patient decision aids.Both the IPDAS papers and the MNM project agree that no single visual or numerical format is optimal for every probability communication situation.The MNM review provides strong evidentiary support for IPDAS recommendations in favor of using part-to-whole graphical formats (e.g., icon arrays) and consistent denominators.The MNM review also supports the IPDAS cautions against verbal probability terms and 1-in-X formats as well as its concerns about the potential biasing effects of relative risk formats and framing.MNM evidence is weaker for IPDAS recommendations about placing numerical estimates in context.
{"title":"Evidence on Methods for Communicating Health-Related Probabilities: Comparing the Making Numbers Meaningful Systematic Review to the 2021 IPDAS Evidence Paper Recommendations.","authors":"Brian J Zikmund-Fisher, Natalie C Benda, Jessica S Ancker","doi":"10.1177/0272989X251346811","DOIUrl":"10.1177/0272989X251346811","url":null,"abstract":"<p><p>PurposeTo summarize the degree to which evidence from our recent Making Numbers Meaningful (MNM) systematic review of the effects of data presentation format on communication of health numbers supports recommendations from the 2021 International Patient Decision Aids Standards (IPDAS) Collaboration papers on presenting probabilities.MethodsThe MNM review generated 1,119 distinct findings (derived from 316 papers) related to communication of probabilities to patients or other lay audiences, classifying each finding by its relation to audience task, type of stimulus (data and data presentation format), and up to 10 distinct sets of outcomes: identification and/or recall, contrast, categorization, computation, probability perceptions and/or feelings, effectiveness perceptions and/or feelings, behavioral intentions or behavior, trust, preference, and discrimination. Here, we summarize the findings related to each of the 35 IPDAS paper recommendations.ResultsStrong evidence exists to support several IPDAS recommendations, including those related to the use of part-to-whole graphical formats (e.g., icon arrays) and avoidance of verbal probability terms, 1-in-X formats, and relative risk formats to prevent amplification of probability perceptions, effectiveness perceptions, and/or behavioral intentions as well as the use of consistent denominators to improve computation outcomes. However, the evidence base appears weaker and less complete for other IPDAS recommendations (e.g., recommendations regarding numerical estimates in context and evaluative labels). 
The IPDAS papers and the MNM review agree that both communication of uncertainty and use of interactive formats need further research.ConclusionsThe idea that no one visual or numerical format is optimal for every probability communication situation is both an IPDAS panel recommendation and foundational to the MNM project's design. Although no MNM evidence contradicts IPDAS recommendations, the evidence base needed to support many common probability communication recommendations remains incomplete.HighlightsThe Making Numbers Meaningful (MNM) systematic review of the literature on communicating health numbers provides mixed support for the recommendations of the 2021 International Patient Decision Aids Standards (IPDAS) evidence papers on presenting probabilities in patient decision aids.Both the IPDAS papers and the MNM project agree that no single visual or numerical format is optimal for every probability communication situation.The MNM review provides strong evidentiary support for IPDAS recommendations in favor of using part-to-whole graphical formats (e.g., icon arrays) and consistent denominators.The MNM review also supports the IPDAS cautions against verbal probability terms and 1-in-X formats as well as its concerns about the potential biasing effects of relative risk formats and framing.MNM evidence is weaker related to IPDAS recommendations about placing numerical estimates in context","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":" ","pages":"794-810"},"PeriodicalIF":3.1,"publicationDate":"2025-10-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12236432/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144576775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-01Epub Date: 2025-06-24DOI: 10.1177/0272989X251346788
Jonathan Wang, Donald A Redelmeier
Artificial intelligence models display human-like cognitive biases when generating medical recommendations. We tested whether an explicit forewarning, "Please keep in mind cognitive biases and other pitfalls of reasoning," might mitigate biases in OpenAI's generative pretrained transformer large language model. We used 10 clinically nuanced cases to test specific biases with and without a forewarning. Responses from the forewarning group were 50% longer and discussed cognitive biases more than 100 times more frequently compared with responses from the control group. Despite these differences, the forewarning decreased overall bias by only 6.9%, and no bias was extinguished completely. These findings highlight the need for clinician vigilance when interpreting generated responses that might appear thoughtful and deliberate.HighlightsArtificial intelligence models can be warned to avoid racial and gender bias.Forewarning artificial intelligence models to avoid cognitive biases does not adequately mitigate multiple pitfalls of reasoning.Critical reasoning remains an important clinical skill for practicing physicians.
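The headline figures above are simple rates over the tested cases. As a toy illustration (the per-case outcomes below are invented, not the study's data), the overall bias reduction can be computed like this:

```python
# Hypothetical sketch of the outcome measure: the bias rate in each condition
# and the relative reduction produced by the forewarning. The flag lists are
# made up for demonstration; the study's own data are not reproduced here.

def bias_rate(flags):
    """Fraction of responses judged biased (1 = biased, 0 = unbiased)."""
    return sum(flags) / len(flags)

def relative_reduction(control, forewarned):
    """Relative decrease in bias rate from control to forewarned condition."""
    base = bias_rate(control)
    return (base - bias_rate(forewarned)) / base

# Ten hypothetical cases per condition (1 = biased response observed).
control_flags    = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 80% of responses biased
forewarned_flags = [1, 1, 0, 0, 1, 1, 1, 1, 0, 1]  # 70% of responses biased

print(f"relative reduction: {relative_reduction(control_flags, forewarned_flags):.1%}")
```

A small relative reduction, as here, is compatible with the forewarned responses nonetheless *discussing* bias far more often, which is the article's cautionary point.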
{"title":"Forewarning Artificial Intelligence about Cognitive Biases.","authors":"Jonathan Wang, Donald A Redelmeier","doi":"10.1177/0272989X251346788","DOIUrl":"10.1177/0272989X251346788","url":null,"abstract":"<p><p>Artificial intelligence models display human-like cognitive biases when generating medical recommendations. We tested whether an explicit forewarning, \"Please keep in mind cognitive biases and other pitfalls of reasoning,\" might mitigate biases in OpenAI's generative pretrained transformer large language model. We used 10 clinically nuanced cases to test specific biases with and without a forewarning. Responses from the forewarning group were 50% longer and discussed cognitive biases more than 100 times more frequently compared with responses from the control group. Despite these differences, the forewarning decreased overall bias by only 6.9%, and no bias was extinguished completely. These findings highlight the need for clinician vigilance when interpreting generated responses that might appear seemingly thoughtful and deliberate.HighlightsArtificial intelligence models can be warned to avoid racial and gender bias.Forewarning artificial intelligence models to avoid cognitive biases does not adequately mitigate multiple pitfalls of reasoning.Critical reasoning remains an important clinical skill for practicing physicians.</p>","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":" ","pages":"913-916"},"PeriodicalIF":3.1,"publicationDate":"2025-10-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413502/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144477583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-08-01Epub Date: 2025-05-29DOI: 10.1177/0272989X251343082
Jiawen Deng, Mohamed E Elghobashy, Kathleen Zang, Shubh K Patel, Eddie Guo, Kiyan Heybati
Machine-learning (ML) models have the potential to transform health care by enabling more personalized and data-driven clinical decision making. However, their successful implementation in clinical practice requires careful consideration of factors beyond predictive accuracy. We provide an overview of essential considerations for developing clinically applicable ML models, including methods for assessing and improving calibration, selecting appropriate decision thresholds, enhancing model explainability, identifying and mitigating bias, as well as methods for robust validation. We also discuss strategies for improving accessibility to ML models and performing real-world testing.HighlightsThis tutorial provides clinicians with a comprehensive guide to implementing machine-learning classification models in clinical practice.Key areas covered include model calibration, threshold selection, explainability, bias mitigation, validation, and real-world testing, all of which are essential for the clinical deployment of machine-learning models.Following this guidance can help clinicians bridge the gap between machine-learning model development and real-world application and enhance patient care outcomes.
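Two of the considerations the tutorial names, calibration assessment and decision-threshold selection, can be sketched in a few lines. This is an illustrative sketch with made-up predictions, not code from the tutorial: it bins predicted probabilities to estimate expected calibration error (ECE), and it picks a threshold by maximizing Youden's J (one common criterion; net-benefit approaches are an alternative the tutorial also covers):

```python
# Illustrative sketch: ECE by probability binning, and threshold selection by
# Youden's J. The probs/labels below are invented toy data.

def expected_calibration_error(probs, labels, n_bins=5):
    """Per-bin |mean predicted probability - observed event rate|,
    averaged with weights proportional to bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    ece = 0.0
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            rate = sum(y for _, y in b) / len(b)
            ece += abs(mean_p - rate) * len(b) / len(probs)
    return ece

def best_threshold(probs, labels, grid=None):
    """Threshold maximizing sensitivity + specificity - 1 (Youden's J)."""
    grid = grid or [i / 100 for i in range(1, 100)]
    def youden(t):
        tp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 1)
        fn = sum(1 for p, y in zip(probs, labels) if p < t and y == 1)
        tn = sum(1 for p, y in zip(probs, labels) if p < t and y == 0)
        fp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        return sens + spec - 1
    return max(grid, key=youden)

probs  = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]  # toy predicted risks
labels = [0,   0,   0,   1,   0,   1,   1,   1  ]  # toy observed outcomes
print(expected_calibration_error(probs, labels))
print(best_threshold(probs, labels))
```

A model can have a high AUC yet a large ECE; that gap is exactly why the tutorial treats calibration and threshold choice as separate steps from discrimination.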
{"title":"So You've Got a High AUC, Now What? An Overview of Important Considerations when Bringing Machine-Learning Models from Computer to Bedside.","authors":"Jiawen Deng, Mohamed E Elghobashy, Kathleen Zang, Shubh K Patel, Eddie Guo, Kiyan Heybati","doi":"10.1177/0272989X251343082","DOIUrl":"10.1177/0272989X251343082","url":null,"abstract":"<p><p>Machine-learning (ML) models have the potential to transform health care by enabling more personalized and data-driven clinical decision making. However, their successful implementation in clinical practice requires careful consideration of factors beyond predictive accuracy. We provide an overview of essential considerations for developing clinically applicable ML models, including methods for assessing and improving calibration, selecting appropriate decision thresholds, enhancing model explainability, identifying and mitigating bias, as well as methods for robust validation. We also discuss strategies for improving accessibility to ML models and performing real-world testing.HighlightsThis tutorial provides clinicians with a comprehensive guide to implementing machine-learning classification models in clinical practice.Key areas covered include model calibration, threshold selection, explainability, bias mitigation, validation, and real-world testing, all of which are essential for the clinical deployment of machine-learning models.Following these guidance can help clinicians bridge the gap between machine-learning model development and real-world application and enhance patient care outcomes.</p>","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":" ","pages":"640-653"},"PeriodicalIF":3.1,"publicationDate":"2025-08-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12260203/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-08-01Epub Date: 2025-06-12DOI: 10.1177/0272989X251340941
Doug Coyle, David Glynn, Jeremy D Goldhaber-Fiebert, Edward C F Wilson
IntroductionEconomic evaluations identify the best course of action by a decision maker with respect to the level of health within the overall population. Traditionally, they identify 1 optimal treatment choice. In many jurisdictions, multiple technologies can be covered for the same heterogeneous patient population, which limits the applicability of this framework for directly determining whether a new technology should be covered. This article explores the impact of different decision frameworks within this context.MethodsThree alternate decision frameworks were considered: the traditional normative framework in which only the optimal technology will be covered (normative); a commonly adopted framework in which the new technology is recommended for reimbursement only if it is optimal, with coverage of other technologies remaining as before (current); and a framework that assesses specifically whether coverage of the new technology is optimal, incorporating previous reimbursement decisions and the market share of current technologies (positivist). The implications of the frameworks were assessed using a simulated probabilistic Markov model for a chronic progressive condition.ResultsResults illustrate how the different frameworks can lead to different reimbursement recommendations. This in turn produces differences in population health effects and the resultant price reductions required for covering the new technology.ConclusionBy covering only the optimal treatment option, decision makers can maximize the level of health across a population. 
If decision makers are unwilling to defund technologies, however, the second-best option of adopting the positivist framework has the greatest relevance for deciding whether a new technology should be covered.HighlightsTraditionally, economic evaluations focus on identifying the optimal treatment choice.This paper considers three alternative decision frameworks within the context of multiple technologies being covered for the same heterogeneous patient population.This paper highlights that if decision makers are unwilling to defund therapies, current approaches to assessing cost-effectiveness may be nonoptimal.
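The comparison the paper makes can be sketched with a toy cohort Markov model. Everything below (states, transition probabilities, costs, utilities, market shares) is invented for illustration; the paper's simulated model is probabilistic and considerably more detailed:

```python
# Deterministic toy version of a 3-state Markov cohort model, used to contrast
# the "normative" framework (cover only the optimal technology) with a
# "positivist" one (the new technology joins a market mix). All numbers are
# hypothetical.

def cohort_value(trans, cost_per_cycle, utility, n_cycles=40):
    """Run a cohort through the Markov model; return (total cost, total QALYs)."""
    dist = [1.0, 0.0, 0.0]                 # states: stable, progressed, dead
    cost = qalys = 0.0
    for _ in range(n_cycles):
        cost += sum(d * c for d, c in zip(dist, cost_per_cycle))
        qalys += sum(d * u for d, u in zip(dist, utility))
        dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return cost, qalys

def nmb(cost, qalys, wtp=50_000):
    """Net monetary benefit at a willingness to pay of wtp per QALY."""
    return qalys * wtp - cost

def population_nmb(shares, values, wtp=50_000):
    """Population NMB when several covered technologies split the market."""
    return sum(s * nmb(c, q, wtp) for s, (c, q) in zip(shares, values))

# Hypothetical technologies: B slows progression but costs more per cycle.
trans_A = [[0.85, 0.10, 0.05], [0.00, 0.90, 0.10], [0.00, 0.00, 1.00]]
trans_B = [[0.90, 0.07, 0.03], [0.00, 0.92, 0.08], [0.00, 0.00, 1.00]]
costs_A = [1_000, 2_000, 0]
costs_B = [3_000, 2_000, 0]
utility = [0.85, 0.55, 0.00]

val_A = cohort_value(trans_A, costs_A, utility)
val_B = cohort_value(trans_B, costs_B, utility)

# Normative framework: cover only the NMB-maximizing technology.
best = max([val_A, val_B], key=lambda v: nmb(*v))

# Positivist framework: the new technology is added but the incumbent keeps,
# say, a 60% market share, so the relevant quantity is the mix's NMB.
mixed = population_nmb([0.6, 0.4], [val_A, val_B])

print(f"NMB A alone: {nmb(*val_A):,.0f}")
print(f"NMB B alone: {nmb(*val_B):,.0f}")
print(f"Population NMB, 60/40 mix: {mixed:,.0f}")
```

The gap between the mix's NMB and the single-best NMB is the population health forgone by not defunding the inferior option, which is the paper's central tension.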
{"title":"Decision Frameworks for Assessing Cost-Effectiveness Given Previous Nonoptimal Decisions.","authors":"Doug Coyle, David Glynn, Jeremy D Goldhaber-Fiebert, Edward C F Wilson","doi":"10.1177/0272989X251340941","DOIUrl":"10.1177/0272989X251340941","url":null,"abstract":"<p><p>IntroductionEconomic evaluations identify the best course of action by a decision maker with respect to the level of health within the overall population. Traditionally, they identify 1 optimal treatment choice. In many jurisdictions, multiple technologies can be covered for the same heterogeneous patient population, which limits the applicability of this framework for directly determining whether a new technology should be covered. This article explores the impact of different decision frameworks within this context.MethodsThree alternate decision frameworks were considered: the traditional normative framework in which only the optimal technology will be covered (normative); a commonly adopted framework in which the new technology is recommended for reimbursement only if it is optimal, with coverage of other technologies remaining as before (current); and a framework that assesses specifically whether coverage of the new technology is optimal, incorporating previous reimbursement decisions and the market share of current technologies (positivist). The implications of the frameworks were assessed using a simulated probabilistic Markov model for a chronic progressive condition.ResultsResults illustrate how the different frameworks can lead to different reimbursement recommendations. This in turn produces differences in population health effects and the resultant price reductions required for covering the new technology.ConclusionBy covering only the optimal treatment option, decision makers can maximize the level of health across a population. 
If decision makers are unwilling to defund technologies, however, the second best option of adopting the positivist framework has the greatest relevance with respect to deciding whether a new technology should be covered.HighlightsTraditionally, economic evaluations focus on identifying the optimal treatment choice.This paper considers three alternative decision frameworks, within the context of multiple technologies being covered for the same heterogeneous patient population.This paper highlight that if decision makers are unwilling to defund therapies, current approaches to assessing cost effectiveness may be non-optimal.</p>","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":" ","pages":"703-713"},"PeriodicalIF":3.1,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12260196/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-08-01Epub Date: 2025-07-04DOI: 10.1177/0272989X251349489
Ravi B Parikh, William J Ferrell, Anthony Girard, Jenna White, Sophia Fang, Justin E Bekelman, Marilyn M Schapira
BackgroundMachine learning (ML) algorithms may improve the prognosis for serious illnesses such as cancer, identifying patients who may benefit from earlier palliative care (PC) or advance care planning (ACP). We evaluated the impact of various presentation strategies of a hypothetical ML algorithm on clinician prognostic accuracy and decision making.MethodsThis was a randomized clinical vignette survey study among medical oncologists who treat metastatic non-small-cell lung cancer (mNSCLC). Between March and June 2023, clinicians were shown 3 vignettes of patients presenting with mNSCLC. The vignettes varied by prognostic risk, as defined from the Lung Cancer Prognostic Index (LCPI). Clinicians estimated life expectancy in months and made recommendations about PC and ACP. Clinicians were then shown the same vignette with a hypothetical survival estimate from a black-box ML algorithm; clinicians were randomized to receive the ML prediction using absolute and/or reference-dependent prognostic estimates. The primary outcome was prognostic accuracy relative to the LCPI.ResultsAmong 51 clinicians with complete responses, the median years in practice was 7 (interquartile range 3.5-19), 14 (27.5%) were female, 23 (45.1%) practiced in a community oncology setting, and baseline accuracy was 54.9% (95% confidence interval [CI] 47.0-62.8) across all vignettes. ML presentation improved accuracy (mean change relative to baseline 20.9%, 95% CI 13.9-27.9, P < 0.001). ML outputs using an absolute presentation strategy alone (mean change 27.4%, 95% CI 16.8-38.1, P < 0.001) or with reference dependence (mean change 33.4%, 95% CI 23.9-42.8, P < 0.001) improved accuracy, but reference dependence alone did not (mean change 2.0% [95% CI -11.1 to 15.0], P = 0.77).
ML presentation did not change the rates of recommending ACP or PC referral (mean change 1.3% and 0.7%, respectively).LimitationsThe singular use case of prognosis in mNSCLC; low initial response rate.ConclusionsML-based assessments may improve prognostic accuracy but may not change decision making.ImplicationsML prognostic algorithms prioritizing explainability and absolute prognoses may have greater impact on clinician decision making.Trial Registration: CT.gov: NCT06463977HighlightsWhile machine learning (ML) algorithms may accurately predict mortality, the impact of prognostic ML on clinicians' prognostic accuracy and decision making and optimal presentation strategies for ML outputs are unclear.In this multicenter randomized survey study among vignettes of patients with advanced cancer, prognostic accuracy improved by 20.9% when clinicians reviewed vignettes with a hypothetical ML mortality risk prediction, with absolute risk presentation strategies resulting in greater accuracy gains than reference-dependent presentations alone.However, ML presentation did not change the rates of recommending advance care planning or palliative care referral (1.3% and 0.7%, respectively).ML-based prognostic assessments without accompanying explanation may improve prognostic accuracy but do not change decisions regarding palliative care referral or advance care planning.
{"title":"The Impact of Machine Learning Mortality Risk Prediction on Clinician Prognostic Accuracy and Decision Support: A Randomized Vignette Study.","authors":"Ravi B Parikh, William J Ferrell, Anthony Girard, Jenna White, Sophia Fang, Justin E Bekelman, Marilyn M Schapira","doi":"10.1177/0272989X251349489","DOIUrl":"10.1177/0272989X251349489","url":null,"abstract":"<p><p>BackgroundMachine learning (ML) algorithms may improve the prognosis for serious illnesses such as cancer, identifying patients who may benefit from earlier palliative care (PC) or advance care planning (ACP). We evaluated the impact of various presentation strategies of a hypothetical ML algorithm on clinician prognostic accuracy and decision making.MethodsThis was a randomized clinical vignette survey study among medical oncologists who treat metastatic non-small-cell lung cancer (mNSCLC). Between March and June 2023, clinicians were shown 3 vignettes of patients presenting with mNSCLC. The vignettes varied by prognostic risk, as defined from the Lung Cancer Prognostic Index (LCPI). Clinicians estimated life expectancy in months and made recommendations about PC and ACP. Clinicians were then shown the same vignette with a hypothetical survival estimate from a black-box ML algorithm; clinicians were randomized to receive the ML prediction using absolute and/or reference-dependent prognostic estimates. The primary outcome was prognostic accuracy relative to the LCPI.ResultsAmong 51 clinicians with complete responses, the median years in practice was 7 (interquartile range 3.5-19), 14 (27.5%) were female, 23 (45.1%) practiced in a community oncology setting, and baseline accuracy was 54.9% (95% confidence interval [CI] 47.0-62.8) across all vignettes. ML presentation improved accuracy (mean change relative to baseline 20.9%, 95% CI 13.9-27.9, <i>P</i> < 0.001). 
ML outputs using an absolute presentation strategy alone (mean change 27.4%, 95% 16.8-38.1, <i>P</i> < 0.001) or with reference dependence (mean change 33.4%, 95% 23.9-42.8, <i>P</i> < 0.001) improved accuracy, but reference dependence alone did not (mean change 2.0% [95% CI -11.1 to 15.0], <i>P</i> = 0.77). ML presentation did not change the rates of recommending ACP nor PC referral (mean change 1.3% and 0.7%, respectively).LimitationsThe singular use case of prognosis in mNSCLC, low initial response rate.ConclusionsML-based assessments may improve prognostic accuracy but not result in changed decision making.ImplicationsML prognostic algorithms prioritizing explainability and absolute prognoses may have greater impact on clinician decision making.Trial Registration: CT.gov: NCT06463977HighlightsWhile machine learning (ML) algorithms may accurately predict mortality, the impact of prognostic ML on clinicians' prognostic accuracy and decision making and optimal presentation strategies for ML outputs are unclear.In this multicenter randomized survey study among vignettes of patients with advanced cancer, prognostic accuracy improved by 20.9% when clinicians reviewed vignettes with a hypothetical ML mortality risk prediction, with absolute risk presentation strategies resulting in greater accuracy gains than reference-dependent presentations alone.However, ML presentation did not change the rates of recommending advance care planning or palliative care referral (1.3% and 0.7%, respectiv","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":" ","pages":"690-702"},"PeriodicalIF":3.1,"publicationDate":"2025-08-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12233153/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144561782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}