Requiem for odds ratios

Health Services Research, published 2024-06-01. DOI: 10.1111/1475-6773.14337
Edward C. Norton PhD, Bryan E. Dowd PhD, Melissa M. Garrido PhD, Matthew L. Maciejewski PhD
{"title":"Requiem for odds ratios","authors":"Edward C. Norton PhD,&nbsp;Bryan E. Dowd PhD,&nbsp;Melissa M. Garrido PhD,&nbsp;Matthew L. Maciejewski PhD","doi":"10.1111/1475-6773.14337","DOIUrl":null,"url":null,"abstract":"<p><i>Health Services Research</i> encourages authors to report marginal effects instead of odds ratios for logistic regression with a binary outcome. Specifically, in the instructions for authors, Manuscript Formatting and Submission Requirements, section 2.4.2.2 Structured abstract and keywords, it reads “Reporting of odds ratios is discouraged (marginal effects preferred) except in case-control studies” (see the <i>HSR</i> website https://www.hsr.org/authors/manuscript-formatting-submission-requirements).</p><p>We applaud this decision. We also encourage other journals to make the same decision. It is time to end the reporting of odds ratios in the scientific literature for most research studies, except for case–control studies with matched samples.</p><p><i>HSR</i>'s decision is due to increasing recognition that odds ratios are not only confusing to non-researchers,<span><sup>1, 2</sup></span> but that researchers themselves often misinterpret them.<span><sup>3, 4</sup></span> Odds ratios are also of limited utility in meta-analyses. Marginal effects, which represent the difference in the probability of a binary outcome between comparison groups, are more straightforward to interpret and compare. Below, we illustrate the difficulties in interpreting odds ratios, outline the conditions that must be met for odds ratios to be compared directly, and explain how marginal effects overcome these difficulties.</p><p>Consider a hypothetical prospective cohort study of whether a new hospital-based discharge program affects the 30-day readmission rate, a binary outcome, observed for each patient who is discharged alive. The program's goal is to help eligible patients avoid unnecessary readmissions, and patients are randomized into participating in the program or not. Suppose that a carefully designed study estimates the logistic regression coefficient (the log odds) on the discharge program to be <span></span><math>\n <mrow>\n <mo>−</mo>\n <mn>0.2</mn>\n </mrow></math>, indicating that readmission rates are lower for patients who participate in the discharge program than patients who do not. When writing about the results, the researcher must decide how to report the magnitude of the change and has several choices for how to do so.</p><p>One option is to report the odds ratio, which in this case is <span></span><math>\n <mrow>\n <mn>0.82</mn>\n <mo>=</mo>\n <mi>exp</mi>\n <mfenced>\n <mrow>\n <mo>−</mo>\n <mn>0.2</mn>\n </mrow>\n </mfenced>\n </mrow></math>, and then compare it with other published odds ratios in the literature. However, this estimated odds ratio of 0.82 depends on an unobservable scaling factor that makes its interpretation conditional on the data and on the model specification.<span><sup>3, 5</sup></span> As odds ratios are scaled by different unobservable factors and are conditional on different model specifications, the estimated odds ratio cannot be compared with any other odds ratio.<span><sup>6, 7</sup></span> Even within a single study, odds ratios based on models including different sets of covariates cannot be compared. 
It would be more accurate to report that, “The estimated odds ratio is 0.82, conditional on the covariates included in the regression, but a different odds ratio would be found if the model included a different set of explanatory variables.” Due to an unobserved scaling factor that is included in every estimated odds ratio, odds ratios are not generalizable.</p><p>Odds ratios from different covariate specifications within the same study or between different studies can almost never be compared directly. The explanation for this requires an understanding of how logistic regression differs from linear regression.<span><sup>3</sup></span> In least squares regression, adding covariates that predict the outcome—but are independent of other covariates (and are therefore not mediators or confounders)—does not change either the estimated parameters or the marginal effects. Adding more independent covariates to a linear regression just reduces the amount of unexplained variation, which reduces the error variance (<span></span><math>\n <mrow>\n <msup>\n <mi>σ</mi>\n <mn>2</mn>\n </msup>\n </mrow></math>), and results in smaller standard errors for each parameter or marginal effect because of improved precision. For example, in a perfectly executed randomized controlled trial (RCT), the assignment to treatment is independent of all covariates, and the covariates are balanced in the treatment and comparison groups. In a perfectly executed RCT, the estimated treatment effect from a least squares regression should be the same whether covariates are included or not. The only difference in the estimated treatment effect with or without covariate adjustment is the standard errors. Including covariates corrects for any imbalance in the covariates resulting from sampling variation. Adding covariates thus improves statistical significance while leaving the expected value of the estimated treatment effects unchanged.</p><p>This result does not carry over to logistic regression (or to probit regression). In contrast to linear regression applied to the RCT, adding covariates will change the estimated coefficients in a logistic regression of a binary outcome from the same RCT, even when those added covariates are not confounders.<span><sup>3-7</sup></span> Therefore, the estimated odds ratios also change unlike the linear regression where the estimated coefficients do not change. The reason that the odds ratios change is because the estimated coefficients in a logistic regression are scaled by an arbitrary factor equal to the square root of the variance of the unexplained part of binary outcome, or <span></span><math>\n <mrow>\n <mi>σ</mi>\n </mrow></math>. That is, logistic regressions estimate <span></span><math>\n <mrow>\n <mi>β</mi>\n <mo>/</mo>\n <mi>σ</mi>\n </mrow></math>, not <span></span><math>\n <mrow>\n <mi>β</mi>\n </mrow></math> (for the full mathematical derivation, see Norton and Dowd<span><sup>3</sup></span>). Furthermore and more problematic, <span></span><math>\n <mrow>\n <mi>σ</mi>\n </mrow></math> is unknown to the researcher.</p><p>Because the estimated coefficients in a logistic regression are scaled by an arbitrary factor <span></span><math>\n <mrow>\n <mi>σ</mi>\n </mrow></math>, the odds ratios are also scaled by an arbitrary factor (odds ratio = <span></span><math>\n <mrow>\n <mi>exp</mi>\n <mfenced>\n <mrow>\n <mi>β</mi>\n <mo>/</mo>\n <mi>σ</mi>\n </mrow>\n </mfenced>\n </mrow></math>). 
Ideally, this arbitrary scaling factor <span></span><math>\n <mrow>\n <mi>σ</mi>\n </mrow></math> would be invariant to changes in covariate specification, but it is not. In fact, this scaling factor changes when more explanatory variables are added to the logistic regression model, because the added variables explain more of the total variation and reduce the unexplained variance and reduce <span></span><math>\n <mrow>\n <mi>σ</mi>\n </mrow></math>. Therefore, adding more independent explanatory variables to the model will increase the odds ratio of the variable of interest (e.g., treatment) due to dividing by a smaller scaling factor (<i>σ</i>), which does not occur when representing the strength of association via relative risks or absolute risks.</p><p>In the same perfectly executed RCT, including additional covariates to a logistic regression on a binary outcome would change the magnitude of the estimated treatment effect (log odds, <span></span><math>\n <mrow>\n <mi>β</mi>\n <mo>/</mo>\n <mi>σ</mi>\n </mrow></math>) and the corresponding odds ratio (<span></span><math>\n <mrow>\n <mi>exp</mi>\n <mfenced>\n <mrow>\n <mi>β</mi>\n <mo>/</mo>\n <mi>σ</mi>\n </mrow>\n </mfenced>\n </mrow></math>). As a result, the interpretation of the odds ratio depends on the covariates included in the model. A comparison of ORs from prior literature is not meaningful if either the covariate specification is different or if the sample is different because the unknown <span></span><math>\n <mrow>\n <mi>σ</mi>\n </mrow></math> is different for each study.</p><p>In the readmission example above, a clearer option would be to report marginal effects in terms of a percentage point change in the probability of readmission, along with the base readmission rate for context.<span><sup>8</sup></span></p><p>In health services research, the most common way of reporting marginal effects is through average marginal effects—the average of the marginal effects computed for each person. These are interpreted as the mean percentage point difference—<i>not</i> the percent difference—in outcome probabilities that accompany a change in the treatment variable's value. For binary treatments, an alternative is to present the predicted probabilities of the outcome when the treatment variable equals 0 and 1.</p><p>Marginal effects are much less sensitive to the unknown scaling factor and exhibit little change when independent covariates are added to the logistic regression model. When averaged, many of these small changes cancel out.<span><sup>3</sup></span> The magnitude of average marginal effects can be compared across different studies, whereas the magnitude of odds ratios cannot. For this reason, marginal effects are preferable to report from logistic regression from RCTs and nonrandomized studies.</p><p>By extension from odds ratios not being comparable across studies due to unknown scaling factors being different, they have limited utility in systematic reviews and meta-analyses. Marginal effects overcome these difficulties.</p><p>Similarly, marginal effects are preferable to odds ratios or coefficients when using logistic regression to generate predictive models that will be applied to other populations. 
The magnitude of the unknown scaling factor in odds ratios or log odds will differ across populations, limiting the generalizability of a predictive model to a population other than the one in which it is tested and trained.</p><p>The choice of how to report results from a logistic regression is important because logistic regression is one of the most common statistical tools in the health services research toolkit. It is also important that researchers—especially researchers who study public policies and quality of care—communicate their results and conclusions clearly to other researchers, policymakers, and the public. Therefore, <i>HSR</i>'s stand on odds ratios will help improve interpretation and communication.</p><p>We commend <i>Health Services Research</i> for deciding to discourage the reporting of odds ratios in most studies. We agree wholeheartedly with this decision, which keeps <i>Health Services Research</i> at the forefront of best practices.</p><p>Dr. Maciejewski was also supported by a Research Career Scientist award from the Department of Veterans Affairs (RCS 10-391).</p>","PeriodicalId":55065,"journal":{"name":"Health Services Research","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/1475-6773.14337","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Services Research","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/1475-6773.14337","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Health Services Research encourages authors to report marginal effects instead of odds ratios for logistic regression with a binary outcome. Specifically, in the instructions for authors, Manuscript Formatting and Submission Requirements, section 2.4.2.2 Structured abstract and keywords, it reads “Reporting of odds ratios is discouraged (marginal effects preferred) except in case-control studies” (see the HSR website https://www.hsr.org/authors/manuscript-formatting-submission-requirements).

We applaud this decision. We also encourage other journals to make the same decision. It is time to end the reporting of odds ratios in the scientific literature for most research studies, except for case–control studies with matched samples.

HSR's decision is due to increasing recognition that odds ratios are not only confusing to non-researchers,1, 2 but that researchers themselves often misinterpret them.3, 4 Odds ratios are also of limited utility in meta-analyses. Marginal effects, which represent the difference in the probability of a binary outcome between comparison groups, are more straightforward to interpret and compare. Below, we illustrate the difficulties in interpreting odds ratios, outline the conditions that must be met for odds ratios to be compared directly, and explain how marginal effects overcome these difficulties.

Consider a hypothetical prospective cohort study of whether a new hospital-based discharge program affects the 30-day readmission rate, a binary outcome, observed for each patient who is discharged alive. The program's goal is to help eligible patients avoid unnecessary readmissions, and patients are randomized into participating in the program or not. Suppose that a carefully designed study estimates the logistic regression coefficient (the log odds) on the discharge program to be −0.2, indicating that readmission rates are lower for patients who participate in the discharge program than for patients who do not. When writing about the results, the researcher must decide how to report the magnitude of the change and has several choices for how to do so.

One option is to report the odds ratio, which in this case is 0.82 = exp(−0.2), and then compare it with other published odds ratios in the literature. However, this estimated odds ratio of 0.82 depends on an unobservable scaling factor that makes its interpretation conditional on the data and on the model specification.3, 5 As odds ratios are scaled by different unobservable factors and are conditional on different model specifications, the estimated odds ratio cannot be compared with any other odds ratio.6, 7 Even within a single study, odds ratios based on models including different sets of covariates cannot be compared. It would be more accurate to report that, “The estimated odds ratio is 0.82, conditional on the covariates included in the regression, but a different odds ratio would be found if the model included a different set of explanatory variables.” Due to an unobserved scaling factor that is included in every estimated odds ratio, odds ratios are not generalizable.
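
For concreteness, the arithmetic behind that number is just an exponentiation of the estimated coefficient. The short Python sketch below (ours, not from the article) reproduces it for the illustrative value of −0.2.

```python
import math

log_odds = -0.2                   # illustrative logistic regression coefficient from the example above
odds_ratio = math.exp(log_odds)   # odds ratio implied by that coefficient
print(round(odds_ratio, 2))       # 0.82
```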

Odds ratios from different covariate specifications within the same study or between different studies can almost never be compared directly. The explanation for this requires an understanding of how logistic regression differs from linear regression.3 In least squares regression, adding covariates that predict the outcome—but are independent of other covariates (and are therefore not mediators or confounders)—does not change either the estimated parameters or the marginal effects. Adding more independent covariates to a linear regression just reduces the amount of unexplained variation, which reduces the error variance (σ²) and results in smaller standard errors for each parameter or marginal effect because of improved precision. For example, in a perfectly executed randomized controlled trial (RCT), the assignment to treatment is independent of all covariates, and the covariates are balanced in the treatment and comparison groups. In a perfectly executed RCT, the estimated treatment effect from a least squares regression should be the same whether covariates are included or not; the only difference between the estimates with and without covariate adjustment is their standard errors. Including covariates corrects for any imbalance in the covariates resulting from sampling variation. Adding covariates thus improves precision (and hence statistical significance) while leaving the expected value of the estimated treatment effect unchanged.
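
To illustrate this property, the following Python sketch (our own simulation with hypothetical variable names, using statsmodels and a continuous outcome for simplicity) fits a least squares regression with and without an independent covariate: the treatment coefficient is essentially unchanged, while its standard error shrinks.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
treat = rng.integers(0, 2, n)                    # randomized binary treatment
covariate = rng.normal(size=n)                   # prognostic covariate, independent of treatment
y = 1.0 - 0.5 * treat + 2.0 * covariate + rng.normal(size=n)   # continuous outcome

short = sm.OLS(y, sm.add_constant(treat.astype(float))).fit()
long = sm.OLS(y, sm.add_constant(np.column_stack([treat, covariate]))).fit()

# The treatment coefficient is essentially the same (about -0.5) in both models;
# only its standard error shrinks when the covariate is added.
print(short.params[1], short.bse[1])
print(long.params[1], long.bse[1])
```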

This result does not carry over to logistic regression (or to probit regression). In contrast to linear regression applied to the RCT, adding covariates will change the estimated coefficients in a logistic regression of a binary outcome from the same RCT, even when those added covariates are not confounders.3-7 Therefore, the estimated odds ratios also change, unlike in linear regression, where the estimated coefficients do not. The odds ratios change because the estimated coefficients in a logistic regression are scaled by an arbitrary factor equal to the square root of the variance of the unexplained part of the binary outcome, σ. That is, logistic regressions estimate β/σ, not β (for the full mathematical derivation, see Norton and Dowd3). Furthermore, and more problematic, σ is unknown to the researcher.

Because the estimated coefficients in a logistic regression are scaled by an arbitrary factor σ, the odds ratios are also scaled by an arbitrary factor (odds ratio = exp(β/σ)). Ideally, this arbitrary scaling factor σ would be invariant to changes in covariate specification, but it is not. In fact, this scaling factor changes when more explanatory variables are added to the logistic regression model, because the added variables explain more of the total variation, which reduces the unexplained variance and therefore reduces σ. As a result, adding more independent explanatory variables to the model moves the estimated coefficient of the variable of interest (e.g., treatment) further from zero, and its odds ratio further from 1, because β is divided by a smaller scaling factor (σ). This does not occur when the strength of association is expressed as a relative risk or an absolute risk difference.
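
A small simulation makes the rescaling visible. The Python sketch below (our own illustrative data-generating process, not from the article) draws a binary readmission outcome from a latent logistic model with a true treatment coefficient of −0.2 and a strong covariate that is independent of treatment. Omitting that covariate attenuates the estimated log odds and pulls the odds ratio toward 1; including it moves the odds ratio back toward exp(−0.2) ≈ 0.82.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200_000
treat = rng.integers(0, 2, n)                    # randomized discharge program
covariate = rng.normal(size=n)                   # strong predictor, independent of treatment
latent = -0.2 * treat + 1.5 * covariate + rng.logistic(size=n)
readmit = (latent > 0).astype(int)               # 30-day readmission indicator

short = sm.Logit(readmit, sm.add_constant(treat.astype(float))).fit(disp=0)
long = sm.Logit(readmit, sm.add_constant(np.column_stack([treat, covariate]))).fit(disp=0)

# Omitting the independent covariate attenuates the treatment coefficient
# (odds ratio closer to 1); including it yields an odds ratio near exp(-0.2) ≈ 0.82.
print(np.exp(short.params[1]), np.exp(long.params[1]))
```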

In the same perfectly executed RCT, including additional covariates in a logistic regression on a binary outcome would change the magnitude of the estimated treatment effect (the log odds, β/σ) and the corresponding odds ratio (exp(β/σ)). As a result, the interpretation of the odds ratio depends on the covariates included in the model. A comparison with odds ratios from the prior literature is not meaningful if either the covariate specification or the sample differs, because the unknown σ differs across studies.

In the readmission example above, a clearer option would be to report marginal effects in terms of a percentage point change in the probability of readmission, along with the base readmission rate for context.8

In health services research, the most common way of reporting marginal effects is through average marginal effects—the average of the marginal effects computed for each person. These are interpreted as the mean percentage point difference—not the percent difference—in outcome probabilities that accompanies a change in the treatment variable's value. For binary treatments, an alternative is to present the predicted probabilities of the outcome when the treatment variable equals 0 and 1.
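
As a hedged illustration, the Python sketch below (simulated data and hypothetical variable names, using statsmodels) computes the average marginal effect of a binary treatment in two equivalent ways: by averaging the difference in predicted probabilities with the treatment set to 1 versus 0 for every observation, and with the get_margeff method.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200_000
treat = rng.integers(0, 2, n)                    # discharge program indicator
covariate = rng.normal(size=n)
readmit = ((-0.2 * treat + 1.5 * covariate + rng.logistic(size=n)) > 0).astype(int)

X = sm.add_constant(np.column_stack([treat, covariate]))
fit = sm.Logit(readmit, X).fit(disp=0)

# Average marginal effect "by hand": the mean difference in predicted readmission
# probabilities with the program switched on versus off for every patient.
X_on, X_off = X.copy(), X.copy()
X_on[:, 1], X_off[:, 1] = 1.0, 0.0
ame = fit.predict(X_on).mean() - fit.predict(X_off).mean()
print(f"Average marginal effect: {ame:.3f}")     # a few percentage points, e.g., about -0.03

# statsmodels reports the same quantity directly, treating the binary regressor as discrete.
print(fit.get_margeff(at="overall", method="dydx", dummy=True).summary())
```

Reported this way, the result reads as a change of a few percentage points in the readmission probability, a quantity that can be compared directly across studies.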

Marginal effects are much less sensitive to the unknown scaling factor and exhibit little change when independent covariates are added to the logistic regression model; when averaged, many of these small changes cancel out.3 The magnitude of average marginal effects can be compared across different studies, whereas the magnitude of odds ratios cannot. For this reason, marginal effects are the preferable quantity to report from logistic regressions in both RCTs and nonrandomized studies.

Because odds ratios are not comparable across studies (their unknown scaling factors differ), they also have limited utility in systematic reviews and meta-analyses. Marginal effects overcome these difficulties.

Similarly, marginal effects are preferable to odds ratios or coefficients when using logistic regression to build predictive models that will be applied to other populations. The magnitude of the unknown scaling factor in the odds ratios or log odds will differ across populations, limiting the generalizability of a predictive model to populations other than the one in which it was trained and tested.

The choice of how to report results from a logistic regression is important because logistic regression is one of the most common statistical tools in the health services research toolkit. It is also important that researchers—especially researchers who study public policies and quality of care—communicate their results and conclusions clearly to other researchers, policymakers, and the public. Therefore, HSR's stand on odds ratios will help improve interpretation and communication.

We commend Health Services Research for deciding to discourage the reporting of odds ratios in most studies. We agree wholeheartedly with this decision, which keeps Health Services Research at the forefront of best practices.

Dr. Maciejewski was also supported by a Research Career Scientist award from the Department of Veterans Affairs (RCS 10-391).
