Ana L Terry, Michele M Chiappa, Juliana McAllister, David A Woodwell, Jessica E Graber
The continuous National Health and Nutrition Examination Survey began data collection in 1999 and proceeded without interruption until operations were suspended in March 2020 in response to the COVID-19 pandemic. Once the Division of Health and Nutrition Examination Surveys determined how field operations could be resumed safely, the next survey cycle was conducted between August 2021 and August 2023. This report describes the survey content, procedures, and methodologies implemented in the August 2021-August 2023 National Health and Nutrition Examination Survey cycle.
Plan and Operations of the National Health and Nutrition Examination Survey, August 2021-August 2023. Vital and Health Statistics, Series 1, No. 66, pp. 1-21. May 2024.
Guangyu Zhang, Yulei He, Van Parsons, Chris Moriarity, Stephen J Blumberg, Benjamin Zablotsky, Aaron Maitland, Matthew D Bramlett, Jonaki Bose
The National Health Interview Survey (NHIS), conducted by the National Center for Health Statistics since 1957, is the principal source of information on the health of the U.S. civilian noninstitutionalized population. NHIS randomly selects one adult (Sample Adult) and, when applicable, one child (Sample Child) within a family (through 2018) or a household (2019 and forward). Sampling weights for the separate analysis of data from Sample Adults and Sample Children are provided annually by the National Center for Health Statistics. Growing interest in analyzing parent-child pair data from NHIS necessitated the development of appropriate analytic weights. Objective: This report explains how dyad weights were created so that data users can analyze NHIS data from Sample Children paired with their mothers or fathers. Methods: Using data from the 2019 NHIS, adult-child pair-level sampling weights were developed by combining each pair's conditional selection probability with their household-level sampling weight. The calculated pair weights were then adjusted for pair-level nonresponse, and large sampling weights were trimmed at the 99th percentile of the derived sampling weights. Examples of analyzing parent-child pair data by means of domain estimation methods (that is, statistical analysis for subpopulations or subgroups) are included in this report. Conclusions: The National Center for Health Statistics has created dyad (pair) weights that can be used for studies of parent-child pairs in NHIS. This method could potentially be adapted to other surveys with similar sampling designs and statistical needs.
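The pair-weight construction and trimming steps described in the Methods can be sketched in a few lines. This is a hypothetical illustration with made-up numbers; `pair_weight`, `trim_weights`, and the toy data are assumptions, not the NHIS production weighting code:

```python
def pair_weight(household_weight, p_pair_given_household):
    """Base pair weight: the household-level sampling weight divided by
    the pair's conditional selection probability within the household."""
    return household_weight / p_pair_given_household

def trim_weights(weights, percentile=99):
    """Cap unusually large weights at the given percentile of the
    derived weights (simple nearest-rank percentile definition)."""
    ordered = sorted(weights)
    k = max(0, int(len(ordered) * percentile / 100) - 1)
    cap = ordered[k]
    return [min(w, cap) for w in weights]

# Toy example: three pairs with (household weight, conditional
# selection probability of the adult-child pair).
base = [pair_weight(hw, p) for hw, p in [(1200, 0.5), (800, 0.25), (1500, 0.5)]]
trimmed = trim_weights(base, percentile=99)
```

A nonresponse adjustment would multiply these base weights by an additional pair-level factor before trimming, as the abstract notes.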
Developing Sampling Weights for Statistical Analysis of Parent-Child Pair Data From the National Health Interview Survey. Vital and Health Statistics, Series 1, No. 207, pp. 1-31. April 2024.
Kevin Chuang, Jennifer Rammon, Hee-Choon Shin, Te-Ching Chen
Background and objectives: Laboratory tests conducted on survey respondents' biological specimens are a major component of the National Health and Nutrition Examination Survey. The National Center for Health Statistics' Division of Health and Nutrition Examination Surveys performs internal analytic method validation studies whenever laboratories undergo instrumental or methodological changes, or when contract laboratories change. These studies assess agreement between methods to evaluate how methodological changes could affect data inference or compromise consistency of measurements across survey cycles. When systematic differences between methods are observed, adjustment equations are released with the data documentation for analysts planning to combine survey cycles or conduct a trend analysis. Adjustment equations help ensure that observed differences from methodological changes are not misinterpreted as population changes. This report assesses the reliability of statistical methods used by the Division of Health and Nutrition Examination Surveys when conducting method validation studies to address concerns that adjustment equations are being overproduced (recommended too frequently). Methods: Public-use 2017-2018 National Health and Nutrition Examination Survey laboratory data were used to simulate "new" measurements for 120 analytic method validation studies. Blinded studies were analyzed to determine the final adjustment recommendation for each study using difference plots, descriptive statistics, t-tests, and Deming regressions. Final recommendations were compared with simulated difference types to assess how often spurious results were observed. Concordance estimates (concordance, misclassification, sensitivity, specificity, and positive and negative predictive values) informed assessments.
Results: Adjustment equations were appropriately recommended for 75.0% of the studies, over-recommended for 5.8%, under-recommended for 15.8%, and recommended with an inappropriate technique for 3.3%. Across simulated difference types, sensitivity ranged from 65.9% to 84.4% and specificity from 74.7% to 97.5%. Conclusions: Findings from this report suggest that the current methodology used by the Division of Health and Nutrition Examination Surveys performs moderately well. Based on these data and analyses, underadjustment was more prevalent than overadjustment, suggesting that the current methodology is conservative.
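Deming regression, one of the agreement tools the Methods list, fits a line while allowing measurement error in both the old and new methods. Below is a minimal sketch under the common assumption of equal error variances (`lam=1`, i.e., orthogonal regression); the function name and toy data are illustrative, not the Division's validation code:

```python
import math

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept for paired measurements,
    given a known ratio `lam` of the two methods' error variances.
    A slope near 1 and intercept near 0 indicate method agreement."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - lam * sxx
             + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)
             ) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

# Toy data: the "new" method carries a constant +0.1 bias, the kind of
# systematic difference an adjustment equation would correct.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.1, 3.1, 4.1, 5.1]
slope, intercept = deming(x, y)
```

Here the fitted slope is 1 and the intercept recovers the +0.1 shift, which would motivate a simple additive adjustment equation.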
Assessing Laboratory Method Validations for Informing Inference Across Survey Cycles in the National Health and Nutrition Examination Survey. Vital and Health Statistics, Series 1, No. 206, pp. 1-41. April 2024.
Amy M Brown, Donielle G White, Nikki B Adams, Rihem Rihem PharmD, Salah Shaikh, Lello Guluma
Objectives: This report documents the results of a validation study conducted to assess the reliability of two algorithms applied to the 2016 National Hospital Care Survey. One algorithm identifies opioid-involved and opioid overdose hospital encounters, and the other identifies encounters with patients who have substance use disorders and selected mental health issues. These algorithms use both medical codes and natural language processing to identify encounters. Methods: To validate the algorithms, medical record abstraction was performed on a stratified sample of 900 hospital encounters from the 2016 National Hospital Care Survey. The abstractors recorded their determinations of opioid involvement, opioid overdose, substance use disorder, and mental health issues on a standard form. Abstractors' determinations were compared with algorithm output to assess overall performance using the F-score and, as a secondary measure, the Matthews correlation coefficient. The 2016 National Hospital Care Survey data are unweighted and not nationally representative. Results: Overall algorithm performance varied by topic and by metric. The opioid-involvement algorithm performed best, with an F-score of 0.95, followed by the substance use disorder algorithm (F-score of 0.79), the mental health issues algorithm (F-score of 0.68), and the opioid overdose algorithm (F-score of 0.48). Assessment by Matthews correlation coefficient indicated an overall poorer level of performance, ranging from a high of 0.57 for the mental health issues algorithm to a low of 0.33 for the opioid-involvement algorithm. The causes of false positives and false negatives likewise varied, including both overly broad code and keyword inclusions and incompleteness of data submitted to the National Hospital Care Survey.
Conclusion: The validation study illustrates which aspects of the developed algorithms performed well and which should be altered or discarded in future iterations. It further emphasizes the importance of data completeness, thereby laying the groundwork for improvements to future survey analyses.
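Both headline metrics can be computed directly from a confusion matrix. The toy counts below are made up (not the survey's results), but they show how a classifier can score a high F-score while its Matthews correlation coefficient stays low, the same divergence the Results describe:

```python
import math

def f_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall.
    Ignores true negatives entirely."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient: a balanced measure that also
    credits true negatives, so it penalizes a small, poorly handled
    negative class that F1 overlooks."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Illustrative counts: a large positive class handled well, but the
# small negative class is mostly misclassified as positive.
tp, tn, fp, fn = 90, 5, 10, 5
f1 = f_score(tp, fp, fn)
m = mcc(tp, tn, fp, fn)
```

With these counts F1 is about 0.92 while MCC is only about 0.34, which is why the two metrics can rank the same algorithms very differently.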
Validation of the Enhanced Opioid Identification and Co-occurring Disorders Algorithms. Vital and Health Statistics, Series 1, No. 205, pp. 1-31. January 2024.
Li-Yen R Hu, Paul Scanlon, Kristen Miller, Yulei He, Katherine E Irimata, Guangyu Zhang, Kristen Cibelli Hibben
Objective: This report on the third round of the Research and Development Survey (RANDS 3) provides a general description of RANDS 3 and presents percentage estimates of selected demographic and health-related variables from the overall sample and by one set of experimental groups embedded in the survey. Statistical tests comparing estimates for the two randomized groups were conducted to evaluate the randomization. Methods: NORC at the University of Chicago conducted RANDS 3 for the National Center for Health Statistics in 2019 using its AmeriSpeak Panel in web-only mode. To assess question-response patterns, probe questions and four sets of experiments were embedded in RANDS 3, with panelists randomized into two groups for each set of experiments. Participants in each group received questions with differences in wording, question-and-response formats, or question order. Results: Of the 4,255 people sampled, 2,646 completed RANDS 3, for a completion rate of 62.2% and a weighted cumulative response rate of 18.1%. Iterative raking was performed using demographic and selected health condition variables to calibrate the RANDS 3 sample to 2019 National Health Interview Survey (NHIS) estimates. As a result, the overall demographic distribution and percentages of asthma, diabetes, hypertension, and high cholesterol for the calibrated RANDS 3 sample aligned with the percentages estimated from the 2019 NHIS. The distributions of demographic and health-related variables were comparable between the two randomized groups examined, except for ever-diagnosed hypertension. Conclusion: As part of a research series using probability-based survey panels, RANDS 3 included health-related questions with a focus on disability and opioids. Because RANDS is an ongoing research platform, a variety of persistent and emergent research questions relating to survey methodology will continue to be examined in current and future rounds of RANDS.
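The iterative raking mentioned in the Results can be illustrated with iterative proportional fitting on a small two-way table of weights. This is a minimal sketch with hypothetical margins, not the calibration actually used for RANDS 3:

```python
def rake_2way(table, row_targets, col_targets, iters=50):
    """Iterative proportional fitting on a two-way table of weights:
    alternately rescale rows, then columns, until both sets of weighted
    margins match the targets (e.g., benchmark NHIS estimates)."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, target in enumerate(row_targets):
            s = sum(t[i])
            t[i] = [v * target / s for v in t[i]]
        for j, target in enumerate(col_targets):
            s = sum(row[j] for row in t)
            for i in range(len(t)):
                t[i][j] *= target / s
    return t

# Toy weight table (e.g., age group x sex) raked to hypothetical
# population margins of 50/50 by row and 60/40 by column.
raked = rake_2way([[10, 20], [30, 40]],
                  row_targets=[50, 50], col_targets=[60, 40])
```

After convergence the weighted row and column totals match the target margins simultaneously, even though no single rescaling step could achieve both.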
National Center for Health Statistics' 2019 Research and Development Survey, RANDS 3. Vital and Health Statistics, Series 1, No. 65, pp. 1-55. September 2023. doi:10.15620/cdc:130273.
This report outlines the methodology, development, and fielding of the 2021 Physician Pain Management Questionnaire (PPMQ) pilot study. The study was conducted by the National Center for Health Statistics and was designed to test the feasibility of a large, nationally representative survey assessing physician awareness and use of established guidelines for prescribing opioids to manage pain.
Doreen M Gidali, Brian W Ward. The 2021 Physician Pain Management Questionnaire Pilot Study. Vital and Health Statistics, Series 1, No. 204, pp. 1-45. September 2023.
Sonja N Williams, Joy Ukaigwe, Brian W Ward, Titilayo Okeyode, Iris M Shimizu
As part of modernization efforts, in 2021 the National Ambulatory Medical Care Survey (NAMCS) began collecting electronic health record (EHR) data for ambulatory care visits in its Health Center (HC) Component. As a result, the National Center for Health Statistics (NCHS) needed to adjust the approaches used in the sampling design for the HC Component. This report provides details on these changes to the 2021-2022 NAMCS.
Sampling Procedures for the Collection of Electronic Health Record Data From Federally Qualified Health Centers, 2021-2022 National Ambulatory Medical Care Survey. Vital and Health Statistics, Series 1, No. 203, pp. 1-16. June 2023.
Katherine E. Irimata, Yulei He, Van Parsons, Hee-Choon Shin, Guangyu Zhang
Objectives: The Research and Development Survey (RANDS) is a series of web-based, commercial panel surveys conducted by the National Center for Health Statistics (NCHS) since 2015. RANDS was designed for methodological research purposes, including supplementing NCHS' evaluation of surveys and questionnaires to detect measurement error and exploring methods to integrate data from commercial survey panels with high-quality data collections to improve survey estimation. The latter goal responds to limitations of web surveys, including coverage and nonresponse bias. To address the potential bias in estimates from RANDS, NCHS has investigated various calibration weighting methods that adjust the RANDS panel weights using one of NCHS' national household surveys, the National Health Interview Survey. This report describes calibration weighting methods and the approaches used to calibrate weights in web-based panel surveys at NCHS.
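One of the simplest calibration adjustments, scaling panel weights so weighted category shares match benchmark shares from a reference survey, can be sketched as follows. The function name and toy benchmark are illustrative assumptions, not one of the specific methods NCHS evaluated:

```python
def poststratify(weights, categories, benchmarks):
    """Scale each respondent's weight so the weighted share of each
    category equals a benchmark share (e.g., a reference-survey
    estimate), preserving the overall weight total."""
    total = sum(weights)
    current = {}
    for w, c in zip(weights, categories):
        current[c] = current.get(c, 0.0) + w
    factors = {c: benchmarks[c] * total / current[c] for c in current}
    return [w * factors[c] for w, c in zip(weights, categories)]

# Toy panel: equal weights but 75% in category A, versus a 50/50
# hypothetical benchmark distribution.
weights = [1.0, 1.0, 1.0, 1.0]
cats = ["A", "A", "A", "B"]
bench = {"A": 0.5, "B": 0.5}
new_w = poststratify(weights, cats, bench)
```

Raking generalizes this idea to several margins at once when the full cross-classification is unknown or too sparse.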
{"title":"Calibration Weighting Methods for the National Center for Health Statistics Research and Development Survey.","authors":"Katherine E. Irimata, Yulei He, V. Parsons, Hee-Choon Shin, Guangyu Zhang","doi":"10.15620/cdc:123463","DOIUrl":"https://doi.org/10.15620/cdc:123463","url":null,"abstract":"Objectives: The Research and Development Survey (RANDS) is a series of web-based, commercial panel surveys that have been conducted by the National Center for Health Statistics (NCHS) since 2015. RANDS was designed for methodological research purposes, including supplementing NCHS' evaluation of surveys and questionnaires to detect measurement error, and exploring methods to integrate data from commercial survey panels with high-quality data collections to improve survey estimation. The latter goal of improving survey estimation is in response to limitations of web surveys, including coverage and nonresponse bias. To address the potential bias in estimates from RANDS, NCHS has investigated various calibration weighting methods to adjust the RANDS panel weights using one of NCHS' national household surveys, the National Health Interview Survey. This report describes calibration weighting methods and the approaches used to calibrate weights in web-based panel surveys at NCHS.","PeriodicalId":38828,"journal":{"name":"Vital and health statistics. Ser. 1: Programs and collection procedures","volume":"87 1","pages":"1-23"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47915477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
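Calibration weighting of the kind this abstract describes can be illustrated with raking (iterative proportional fitting), one standard calibration method: panel weights are repeatedly rescaled so that weighted category totals match known population margins. The sketch below is a minimal illustration under assumed toy data; the function, variable names, and margins are hypothetical and do not represent NCHS's actual RANDS calibration procedure.

```python
# Minimal raking (iterative proportional fitting) sketch.
# Illustrative only; not the actual NCHS RANDS calibration method.
import numpy as np

def rake(weights, categories, targets, iterations=50, tol=1e-8):
    """Adjust weights so weighted category totals match target margins.

    weights    : initial panel weights, shape (n,)
    categories : list of integer-coded category arrays, each shape (n,)
    targets    : list of target-total arrays, one per margin
    """
    w = weights.astype(float).copy()
    for _ in range(iterations):
        max_change = 0.0
        for cats, tgt in zip(categories, targets):
            for k, t in enumerate(tgt):
                mask = cats == k
                current = w[mask].sum()
                if current > 0:
                    factor = t / current
                    w[mask] *= factor  # rescale this category toward its target
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:  # all margins matched; stop early
            break
    return w

# Toy example: calibrate 6 equal-weight respondents to hypothetical
# sex margins (4, 2) and age-group margins (3, 3).
w0 = np.ones(6)
sex = np.array([0, 0, 0, 1, 1, 1])
age = np.array([0, 1, 1, 0, 0, 1])
w = rake(w0, [sex, age], [np.array([4.0, 2.0]), np.array([3.0, 3.0])])
```

After raking, the weighted counts within each sex and age category match the stated margins, while the relative weights within a cell are preserved.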
M. Talih, Katherine E. Irimata, Guangyu Zhang, J. Parker
For the confidence intervals (CIs) used in the Standards for rates from vital statistics and complex health surveys, this report evaluates coverage probability, relative width, and the resulting percentage of rates flagged as statistically unreliable when compared with previously used standards. Additionally, the report assesses the impact of design effects and the denominator's sampling variability, when applicable.
{"title":"Evaluation of the National Center for Health Statistics Data Presentation Standards for Rates From Vital Statistics and Sample Surveys.","authors":"M. Talih, Katherine E. Irimata, Guangyu Zhang, J. Parker","doi":"10.15620/cdc:123462","DOIUrl":"https://doi.org/10.15620/cdc:123462","url":null,"abstract":"For the confidence intervals (CIs) used in the Standards for rates from vital statistics and complex health surveys, this report evaluates coverage probability, relative width, and the resulting percentage of rates flagged as statistically unreliable when compared with previously used standards. Additionally, the report assesses the impact of design effects and the denominator's sampling variability, when applicable.","PeriodicalId":38828,"journal":{"name":"Vital and health statistics. Ser. 1: Programs and collection procedures","volume":"198 1","pages":"1-30"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44763698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
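Coverage probability, the first property this report evaluates, can be estimated by simulation: generate many samples from a known true rate, compute the CI for each, and count how often the interval contains the truth. The sketch below uses a textbook Wilson score interval for a simple binomial proportion; it illustrates the general evaluation idea only and is not the NCHS Standards' actual CI method, design-effect adjustment, or flagging rule.

```python
# Monte Carlo estimate of CI coverage probability for a binomial proportion.
# Illustrative only; not the NCHS data presentation standards' method.
import math
import random

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for x successes in n trials (95% by default)."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def coverage_probability(p_true, n, reps=20000, seed=1):
    """Fraction of simulated samples whose CI contains the true rate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = sum(rng.random() < p_true for _ in range(n))  # binomial draw
        lo, hi = wilson_ci(x, n)
        hits += lo <= p_true <= hi
    return hits / reps

cov = coverage_probability(0.10, 100)  # should land near the nominal 0.95
```

The same simulation loop generalizes to other interval methods and true rates, which is how competing presentation standards can be compared on equal footing.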