Communication‐Efficient Distributed Estimation of Causal Effects With High‐Dimensional Data
Xiaohan Wang, Jiayi Tong, Sida Peng, Yong Chen, Yang Ning
We propose a communication‐efficient algorithm to estimate the average treatment effect (ATE) when the data are distributed across multiple sites and the number of covariates is possibly much larger than the sample size at each site. Our main idea is to calibrate the estimates of the propensity score and outcome models using suitable surrogate loss functions so as to approximately attain the desired covariate balancing property. We show that, under possible model misspecification, our distributed covariate balancing propensity score estimator (disthdCBPS) approximates the global estimator, obtained by pooling the data from all sites, at a fast rate. Thus, our estimator remains consistent and asymptotically normal. In addition, when both the propensity score and the outcome models are correctly specified, the proposed estimator attains the semi‐parametric efficiency bound. We illustrate the empirical performance of the proposed method in both simulation and empirical studies.
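As a rough single‐site illustration of the pipeline this abstract describes (fit propensity and outcome models, check covariate balance, combine in a doubly robust ATE estimate), here is a minimal Python sketch. The surrogate-loss calibration and the distributed aggregation are the paper's contribution and are not reproduced; all data and model choices below are synthetic placeholders.

```python
# Minimal single-site sketch: doubly robust (AIPW) ATE with a covariate-balance
# check. The paper's calibration of these fits across sites is NOT reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
ps_true = 1 / (1 + np.exp(-X[:, 0]))           # true propensity score
T = rng.binomial(1, ps_true)                   # treatment indicator
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)     # outcome; true ATE = 2

ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
mu1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)
mu0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)

# AIPW estimator: consistent if either the propensity or outcome model is correct.
ate = np.mean(T * (Y - mu1) / ps - (1 - T) * (Y - mu0) / (1 - ps) + mu1 - mu0)

# Covariate balancing property: inverse-probability-weighted covariate means
# should match between arms; large imbalance signals poor calibration.
w1, w0 = T / ps, (1 - T) / (1 - ps)
imbalance = np.abs(X.T @ w1 / w1.sum() - X.T @ w0 / w0.sum()).max()
print(f"AIPW ATE estimate: {ate:.3f}, worst weighted imbalance: {imbalance:.3f}")
```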
{"title":"Communication‐Efficient Distributed Estimation of Causal Effects With High‐Dimensional Data","authors":"Xiaohan Wang, Jiayi Tong, Sida Peng, Yong Chen, Yang Ning","doi":"10.1002/sta4.70006","DOIUrl":"https://doi.org/10.1002/sta4.70006","url":null,"abstract":"We propose a communication‐efficient algorithm to estimate the average treatment effect (ATE), when the data are distributed across multiple sites and the number of covariates is possibly much larger than the sample size in each site. Our main idea is to calibrate the estimates of the propensity score and outcome models using some proper surrogate loss functions to approximately attain the desired covariate balancing property. We show that under possible model misspecification, our distributed covariate balancing propensity score estimator (disthdCBPS) can approximate the global estimator, obtained by pooling together the data from multiple sites, at a fast rate. Thus, our estimator remains consistent and asymptotically normal. In addition, when both the propensity score and the outcome models are correctly specified, the proposed estimator attains the semi‐parametric efficiency bound. We illustrate the empirical performance of the proposed method in both simulation and empirical studies.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"1 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Joint Temporal Model for Hospitalizations and ICU Admissions Due to COVID‐19 in Quebec
Mariana Carmona‐Baez, Alexandra M. Schmidt, Shirin Golchi, David Buckeridge
Infectious respiratory diseases have attracted much attention in recent years because of the great burden they place on health systems; a prominent example is severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2), which caused the global COVID‐19 pandemic. As many of these diseases may require hospitalization and even intensive care unit (ICU) admission, understanding the joint dynamics of hospitalizations and ICU admissions across time and across groups of the population remains of great importance. We aim to understand the joint evolution of hospital and ICU admissions given COVID‐19 test‐positive cases in the province of Quebec, Canada. We obtain daily counts, by age group, of confirmed COVID‐19 cases, hospitalizations and ICU admissions due to COVID‐19 in Quebec from March 2020 through October 2021. We propose a joint Bayesian generalized dynamic linear model for the numbers of hospitalizations and ICU admissions to study their temporal trends and possible associations with sex and age group. Additionally, we use transfer functions to investigate whether the number of cases has a memory effect on hospitalizations across the different age groups. The results suggest that there is a clear distinction in the patterns of hospitalizations and ICU admissions across age groups and that the number of cases has a persistent effect on the rate of hospitalization.
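A minimal simulation of the transfer-function idea mentioned in the abstract: past case counts enter a latent effect that decays geometrically, giving cases a persistent memory effect on the hospitalization rate. The recursion below is a generic illustration with assumed parameter values, not the authors' fitted Bayesian joint model.

```python
# Forward simulation of a transfer-function effect: lagged cases feed a latent
# effect E_t that decays geometrically, so cases persistently raise the
# hospitalization rate. Parameters (rho, gamma, alpha) are assumed values.
import numpy as np

rng = np.random.default_rng(1)
T = 200
cases = rng.poisson(lam=50 + 30 * np.sin(np.arange(T) / 20.0))  # synthetic daily cases

rho, gamma = 0.9, 0.002        # memory decay and transfer gain (illustrative)
alpha = np.log(5.0)            # baseline log-rate of hospitalizations
E = np.zeros(T)
hosp = np.zeros(T, dtype=int)
for t in range(1, T):
    E[t] = rho * E[t - 1] + cases[t - 1]        # transfer function: memory of past cases
    hosp[t] = rng.poisson(np.exp(alpha + gamma * E[t]))

print("mean daily cases:", cases.mean(), "mean daily hospitalizations:", hosp.mean())
```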
{"title":"A Joint Temporal Model for Hospitalizations and ICU Admissions Due to COVID‐19 in Quebec","authors":"Mariana Carmona‐Baez, Alexandra M. Schmidt, Shirin Golchi, David Buckeridge","doi":"10.1002/sta4.70000","DOIUrl":"https://doi.org/10.1002/sta4.70000","url":null,"abstract":"Infectious respiratory diseases have been of interest in recent years for the great burden they place on health systems, for instance, the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) that caused the global COVID‐19 pandemic. As many of these diseases might require hospitalization and even intensive care unit (ICU) admission, understanding the joint dynamics of hospitalizations and ICU admissions across time and different groups of the population remains of great importance. We aim to understand the joint evolution of hospital and ICU admissions given COVID‐19 test‐positive cases in the province of Quebec, Canada. We obtain the daily counts, by age group, on the number of confirmed COVID‐19 cases, the number of hospitalizations and the number of ICU admissions due to COVID‐19, from March 2020 through October 2021 in Quebec. We propose a joint Bayesian generalized dynamic linear model for the number of hospitalizations and ICU admissions to study their temporal trends and possible associations with sex and age group. Additionally, we use transfer functions to investigate if there is a memory effect of the number of cases on hospitalizations across the different age groups. The results suggest that there is a clear distinction in the patterns of hospitalizations and ICU admissions across age groups and that the number of cases has a persistent effect on the rate of hospitalization.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"61 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bitcoin Price Prediction Using Deep Bayesian LSTM With Uncertainty Quantification: A Monte Carlo Dropout–Based Approach
Masoud Muhammed Hassan
Bitcoin, one of the most successful cryptocurrencies, is gaining popularity online and is being used in a variety of transactions. Research on Bitcoin price prediction has recently received growing attention, and researchers have investigated various state‐of‐the‐art machine learning (ML) and deep learning (DL) models for the task. However, although these models provide promising predictions, they consistently exhibit uncertainty, which cannot be adequately quantified by classical ML models alone. Motivated by the success of Bayesian approaches in several areas of ML and DL, this study uses Bayesian methods alongside Long Short‐Term Memory (LSTM) networks to predict the closing Bitcoin price and, in turn, measure the uncertainty of the prediction model. Specifically, we adopt the Monte Carlo dropout (MC‐Dropout) method with a Bayesian LSTM model to quantify the epistemic uncertainty of the model's predictions and provide confidence intervals for the predicted outputs. Experimental results show that the proposed model is efficient and outperforms other state‐of‐the‐art models in terms of root mean square error (RMSE), mean absolute error (MAE) and R². We therefore believe that these models may assist investors and traders in making critical decisions based on short‐term predictions of the Bitcoin price. This study illustrates the potential benefits of Bayesian DL approaches in time series analysis for improving prediction accuracy and reliability.
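A minimal sketch of the MC-Dropout device in PyTorch, assuming a generic two-layer LSTM on windows of past prices: dropout is left active at prediction time, and repeated stochastic forward passes yield a predictive mean and an epistemic uncertainty band. The architecture and hyperparameters are placeholders, not the paper's tuned model.

```python
# MC-Dropout with an LSTM: keep dropout ON at prediction time and average many
# stochastic forward passes to approximate the predictive distribution.
import torch
import torch.nn as nn

class MCDropoutLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64, p_drop=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            dropout=p_drop, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))  # predict next closing price

model = MCDropoutLSTM()
x = torch.randn(8, 30, 1)                        # 8 windows of 30 past prices

model.train()                                    # keep dropout active for MC sampling
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(200)])  # (200, 8, 1)

mean = samples.mean(0).squeeze(-1)
std = samples.std(0).squeeze(-1)                 # epistemic uncertainty proxy
lo, hi = mean - 1.96 * std, mean + 1.96 * std    # approximate 95% interval
print(mean.shape, lo.shape, hi.shape)
```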
{"title":"Bitcoin Price Prediction Using Deep Bayesian LSTM With Uncertainty Quantification: A Monte Carlo Dropout–Based Approach","authors":"Masoud Muhammed Hassan","doi":"10.1002/sta4.70001","DOIUrl":"https://doi.org/10.1002/sta4.70001","url":null,"abstract":"Bitcoin, being one of the most triumphant cryptocurrencies, is gaining increasing popularity online and is being used in a variety of transactions. Recently, research on Bitcoin price predictions is receiving more attention, and researchers have investigated the various state‐of‐the‐art machine learning (ML) and deep learning (DL) models to predict Bitcoin price. However, despite these models providing promising predictions, they consistently exhibit uncertainty, which cannot be adequately quantified by classical ML models alone. Motivated by the enormous success of applying Bayesian approaches in several disciplines of ML and DL, this study aims to use Bayesian methods alongside Long Short‐Term Memory (LSTM) to predict the closing Bitcoin price and consequently measure the uncertainty of the prediction model. Specifically, we adopted the Monte Carlo dropout (MC‐Dropout) method with the Bayesian LSTM model to quantify the epistemic uncertainty of the model's predictions and provided confidence intervals for the predicted outputs. Experimental results showed that the proposed model is efficient and outperforms other state‐of‐the‐art models in terms of root mean square error (RMSE), mean absolute error (MAE) and <jats:italic>R</jats:italic><jats:sup>2</jats:sup>. Thus, we believe that these models may assist the investors and traders in making critical decisions based on short‐term predictions of Bitcoin price. This study illustrates the potential benefits of utilizing Bayesian DL approaches in time series analysis to improve data prediction accuracy and reliability.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"3 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel Closed‐Form Point Estimators for a Weighted Exponential Family Derived From Likelihood Equations
Roberto Vila, Eduardo Nakano, Helton Saulo
In this paper, we propose and investigate closed‐form point estimators for a weighted exponential family. We also develop a bias‐reduced version of these estimators through bootstrap methods. The estimators are assessed in a Monte Carlo simulation, which reveals favourable results for the proposed bootstrap bias‐reduced estimators. We illustrate the proposed methodology using two real data sets.
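The bootstrap bias-reduction device is generic and easy to sketch: the corrected estimate is twice the plug-in estimate minus the mean of the bootstrap replicates. The example below applies it to the ordinary exponential rate, whose closed-form estimator 1/x̄ is biased upward in small samples; the paper's weighted-exponential-family estimators differ.

```python
# Generic bootstrap bias reduction: theta_bc = 2 * theta_hat - mean(bootstrap
# replicates). Illustrated on the exponential rate, not the paper's family.
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1 / 2.0, size=30)     # true rate = 2, small sample

theta_hat = 1 / x.mean()                        # closed-form point estimator

B = 2000
boot = np.array([1 / rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(B)])
theta_bc = 2 * theta_hat - boot.mean()          # bias-reduced estimator

print(f"plug-in: {theta_hat:.3f}, bootstrap bias-reduced: {theta_bc:.3f}")
```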
{"title":"Novel Closed‐Form Point Estimators for a Weighted Exponential Family Derived From Likelihood Equations","authors":"Roberto Vila, Eduardo Nakano, Helton Saulo","doi":"10.1002/sta4.723","DOIUrl":"https://doi.org/10.1002/sta4.723","url":null,"abstract":"In this paper, we propose and investigate closed‐form point estimators for a weighted exponential family. We also develop a bias‐reduced version of these proposed closed‐form estimators through bootstrap methods. Estimators are assessed using a Monte Carlo simulation, revealing favourable results for the proposed bootstrap bias‐reduced estimators. We illustrate the proposed methodology with the use of two real data sets.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"60 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exact interval estimation for three parameters subject to false positive misclassification
Shuiyun Lu, Weizhen Wang, Tianfa Xie
Binary data subject to one type of misclassification arise in various fields. Such data are collected in a double‐sampling scheme that includes a gold standard test and a fallible test. The main parameter of interest for this type of data is the positive probability of the gold standard test. Existing intervals are unreliable because the given nominal level is not achieved. In this paper, we construct an exact interval by inverting the E+M score tests and improve it by a general function‐based method. We find that the total length of the improved interval is shorter than that of the exact intervals obtained by applying the same improvement to several existing approximate intervals, including the score and Bayesian intervals. Therefore, it is recommended for practice. We are also interested in two other parameters: the difference between the two positive rates of the fallible and gold standard tests, and the false positive rate of the fallible test. To the best of our knowledge, research on these two parameters is limited. For the former, we find that any interval for the main parameter can be converted to an interval for it; so the interval converted from the aforementioned recommended interval is recommended for this inference. For the latter, the improved interval over the E+M score interval is derived. We use an example to illustrate how the intervals are computed and provide a real data analysis.
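Test inversion, the construction underlying the paper's exact interval, is easiest to see in the one-parameter binomial case: the confidence set collects every null value that an exact test fails to reject. The sketch below uses a simple two-sided exact binomial test on a grid; the paper's E+M score-test inversion in the double-sampling design is considerably more involved.

```python
# Exact interval by test inversion in the simplest case: the binomial CI is
# the set of p0 not rejected by an exact two-sided test. Only the inversion
# principle carries over to the paper's double-sampling setting.
import numpy as np
from scipy.stats import binom

def exact_ci(x, n, alpha=0.05, grid=2000):
    p_grid = np.linspace(1e-6, 1 - 1e-6, grid)
    keep = []
    for p0 in p_grid:
        # two-sided exact p-value: twice the smaller tail (one common choice)
        pval = 2 * min(binom.cdf(x, n, p0), binom.sf(x - 1, n, p0))
        if pval > alpha:
            keep.append(p0)                     # p0 not rejected: in the interval
    return min(keep), max(keep)

print(exact_ci(x=7, n=50))
```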
{"title":"Exact interval estimation for three parameters subject to false positive misclassification","authors":"Shuiyun Lu, Weizhen Wang, Tianfa Xie","doi":"10.1002/sta4.717","DOIUrl":"https://doi.org/10.1002/sta4.717","url":null,"abstract":"SummaryBinary data subject to one type of misclassification exist in various fields. It is collected in a double‐sampling scheme that includes a gold standard test and a fallible test. The main parameter of interest for this type of data is the positive probability of the gold standard test. Existing intervals are unreliable because the given nominal level is not achieved. In this paper, we construct an exact interval by inverting the E+M score tests and improve it by the general ‐function method. We find that the total length of the improved interval is shorter than the exact intervals that are also the improved intervals when we apply the ‐function to several existing approximate intervals, including the score and Bayesian intervals. Therefore, it is recommended for practice. We are also interested in two other parameters: —the difference between the two positive rates for the fallible and gold standard tests—and —the false positive rate for the fallible test. To the best of our knowledge, the research on these two parameters is limited. For , we find that any interval for can be converted to an interval for . So, the interval converted from the aforementioned recommended interval for is recommended for inferring . For , the improved interval by the ‐function method over the E+M score interval is derived. We use an example to illustrate how the intervals are computed and provide a real data analysis.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"10 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decorrelated nearest shrunken centroids for tensor data
Shaokang Ren, Munwon Yang, Qing Mai
The nearest shrunken centroids (NSC) method is an efficient and accurate classifier. However, it is incapable of modelling correlation among predictors. Moreover, many contemporary datasets have tensor predictors that cannot be directly handled by NSC. We tackle these challenges by proposing a new distance‐based classifier, tensor decorrelated NSC (TDNSC). TDNSC leverages the popular separable covariance structure on tensor data to decorrelate the data, after which NSC can be applied directly. Unlike existing tensor classifiers that often rely on complicated iterative algorithms, TDNSC has analytical solutions. Theoretical properties and empirical results suggest that TDNSC is a promising method for tensor classification.
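A rough illustration of the decorrelate-then-NSC idea for matrix-valued predictors, assuming a separable covariance: estimate a covariance per mode, whiten each observation from both sides, then apply scikit-learn's NearestCentroid with shrinkage (an NSC implementation) to the vectorized result. The moment estimators below are placeholders for the paper's analytical TDNSC solution.

```python
# Decorrelate matrix predictors under a separable covariance, then run NSC.
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(3)
n, r, c = 200, 5, 8
labels = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, r, c)) + labels[:, None, None] * 0.5  # matrix predictors

Xc = X - X.mean(axis=0)                                # centre
S_row = np.einsum('nij,nkj->ik', Xc, Xc) / (n * c)     # row-mode covariance
S_col = np.einsum('nij,nik->jk', Xc, Xc) / (n * r)     # column-mode covariance

def inv_sqrt(S):                                       # symmetric inverse square root
    vals, vecs = np.linalg.eigh(S)
    return (vecs / np.sqrt(vals)) @ vecs.T

Z = np.einsum('ab,nbc,cd->nad', inv_sqrt(S_row), X, inv_sqrt(S_col))  # decorrelated

clf = NearestCentroid(shrink_threshold=0.2).fit(Z.reshape(n, -1), labels)
print("training accuracy:", clf.score(Z.reshape(n, -1), labels))
```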
{"title":"Decorrelated nearest shrunken centroids for tensor data","authors":"Shaokang Ren, Munwon Yang, Qing Mai","doi":"10.1002/sta4.720","DOIUrl":"https://doi.org/10.1002/sta4.720","url":null,"abstract":"The nearest shrunken centroids (NSC) method is an efficient and accurate classifier. However, it is incapable of modelling correlation among predictors. Moreover, many contemporary datasets have tensor predictors that cannot be directly handled by NSC. We tackle these challenges by proposing a new distance‐based classifier, tensor decorrelated NSC (TDNSC). TDNSC leverages the popular separable covariance structure on tensor data to decorrelate data and allow easy application of NSC afterwards. Unlike existing tensor classifiers that often rely on complicated iterative algorithms, TDNSC has analytical solutions. The theoretical properties and empirical results suggest that TDNSC is a promising method for tensor classification.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"13 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel time‐varying coefficient Poisson difference model driven by observation
Ye Liu, Dehui Wang
This paper studies a novel observation‐driven, time‐varying coefficient integer‐valued time series model. Built on the Poisson difference distribution and the extended binomial thinning operator, the model is suitable for integer‐valued time series that can take negative values. The parameters are estimated by the conditional least squares (CLS) and conditional maximum likelihood (CML) methods, and the consistency and asymptotic normality of the resulting estimators are discussed. Likelihood ratio tests are employed to examine the existence of the covariate and observation‐driven effects. Numerical simulations are conducted to verify the accuracy and stability of the model. Finally, a real data application demonstrates the usefulness and adaptability of the newly proposed model.
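A toy observation-driven Poisson-difference model fitted by conditional least squares, to make the CLS idea concrete: the conditional mean is the difference of two Poisson intensities driven by the previous observation, and CLS minimizes squared one-step prediction errors. The specification below is an assumed simplification, without the paper's time-varying coefficients, covariates or extended binomial thinning.

```python
# Toy Skellam-type (Poisson difference) model with observation-driven
# intensities, estimated by conditional least squares (CLS).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def intensities(theta, x_prev):
    w1, w2, a = theta
    lam1 = np.maximum(1e-6, w1 + a * np.maximum(x_prev, 0))
    lam2 = np.maximum(1e-6, w2 + a * np.maximum(-x_prev, 0))
    return lam1, lam2

true = np.array([3.0, 2.0, 0.4])
T = 500
X = np.zeros(T)
for t in range(1, T):
    l1, l2 = intensities(true, X[t - 1])
    X[t] = rng.poisson(l1) - rng.poisson(l2)     # Poisson difference innovation

def cls_loss(theta):
    l1, l2 = intensities(theta, X[:-1])
    return np.sum((X[1:] - (l1 - l2)) ** 2)      # squared one-step prediction errors

fit = minimize(cls_loss, x0=np.array([1.0, 1.0, 0.1]), method='Nelder-Mead')
print("true:", true, "CLS estimate:", np.round(fit.x, 2))
```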
{"title":"A novel time‐varying coefficient Poisson difference model driven by observation","authors":"Ye Liu, Dehui Wang","doi":"10.1002/sta4.721","DOIUrl":"https://doi.org/10.1002/sta4.721","url":null,"abstract":"This paper studies a novel time‐varying coefficient integer‐valued time series model driven by observation. The model is suitable for modeling negative integer‐valued time series based on the Poisson difference distribution and extended binomial thinning operator. Main methods used to estimate the parameters are the conditional least squares (CLS) and conditional maximum likelihood (CML) methods. This paper also discusses the consistency and asymptotic normality of the estimation results. Likelihood ratio tests are employed to examine the existence of covariate and observation. Numerical simulations are conducted to verify the accuracy and stability of the model. Finally, a real data application is presented to demonstrate the usefulness and adaptability of this newly proposed model.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"9 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141944780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spectral approach for the dynamic Bradley–Terry model
Xinyu Tian, Jian Shi, Xiaotong Shen, Kai Song
Dynamic ranking is becoming crucial in many applications, especially with the collection of voluminous time‐dependent data. One such application is sports statistics, where dynamic ranking aids in forecasting the performance of competitive teams, drawing on historical and current data. Despite its usefulness, predicting and inferring rankings pose challenges in environments necessitating time‐dependent modelling. This paper introduces a spectral ranker called Kernel Rank Centrality, designed to rank items based on pairwise comparisons over time. The ranker operates via kernel smoothing in the Bradley–Terry model, utilising a Markov chain representation. Unlike the maximum likelihood approach, the spectral ranker is nonparametric, demands fewer model assumptions and computations and allows for real‐time ranking. We establish the asymptotic distribution of the ranker by applying an innovative group inverse technique, resulting in a uniform and precise entrywise expansion. This result allows us to devise a new method for predictive inference, previously unavailable in existing approaches. Our numerical examples showcase the ranker's utility in predictive accuracy and in constructing an uncertainty measure for prediction, leveraging data from the National Basketball Association (NBA). The results underscore our method's potential compared with the gold standard in sports, the Elo rating system devised by Arpad Elo.
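The Rank Centrality construction at the heart of the proposed ranker is compact enough to sketch: kernel-weight each pairwise comparison by its distance in time, build a Markov chain whose transition from i to j is proportional to the weighted rate at which j beats i, and read scores off the stationary distribution. The data format, kernel and bandwidth below are illustrative assumptions; the paper's estimator and its inference theory are richer.

```python
# Kernel-weighted Rank Centrality sketch: scores are the stationary
# distribution of a Markov chain built from time-weighted pairwise wins.
import numpy as np

def kernel_rank_centrality(games, t0, n_items, h=30.0):
    """games: list of (time, winner, loser) triples; scores evaluated at time t0."""
    W = np.zeros((n_items, n_items))             # W[i, j]: weighted wins of j over i
    for t, win, lose in games:
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weight in time
        W[lose, win] += w
    d = W.sum(axis=1).max() + 1e-12              # normalisation for a valid chain
    P = W / d
    np.fill_diagonal(P, 1 - P.sum(axis=1))       # lazy Markov chain
    vals, vecs = np.linalg.eig(P.T)              # stationary distribution of P
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return np.abs(pi) / np.abs(pi).sum()         # scores sum to one

games = [(1, 0, 1), (2, 0, 2), (5, 1, 2), (8, 2, 1), (9, 0, 1)]
print(kernel_rank_centrality(games, t0=10, n_items=3))
```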
{"title":"A spectral approach for the dynamic Bradley–Terry model","authors":"Xinyu Tian, Jian Shi, Xiaotong Shen, Kai Song","doi":"10.1002/sta4.722","DOIUrl":"https://doi.org/10.1002/sta4.722","url":null,"abstract":"SummaryThe dynamic ranking, due to its increasing importance in many applications, is becoming crucial, especially with the collection of voluminous time‐dependent data. One such application is sports statistics, where dynamic ranking aids in forecasting the performance of competitive teams, drawing on historical and current data. Despite its usefulness, predicting and inferring rankings pose challenges in environments necessitating time‐dependent modelling. This paper introduces a spectral ranker called Kernel Rank Centrality, designed to rank items based on pairwise comparisons over time. The ranker operates via kernel smoothing in the Bradley–Terry model, utilising a Markov chain model. Unlike the maximum likelihood approach, the spectral ranker is nonparametric, demands fewer model assumptions and computations and allows for real‐time ranking. We establish the asymptotic distribution of the ranker by applying an innovative group inverse technique, resulting in a uniform and precise entrywise expansion. This result allows us to devise a new inferential method for predictive inference, previously unavailable in existing approaches. Our numerical examples showcase the ranker's utility in predictive accuracy and constructing an uncertainty measure for prediction, leveraging data from the National Basketball Association (NBA). The results underscore our method's potential compared with the gold standard in sports, the Arpad Elo rating system.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"14 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141944782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A non‐stationary factor copula model for non‐Gaussian spatial data
Sagnik Mondal, Pavel Krupskii, Marc G. Genton
We introduce a new copula model for non‐stationary replicated spatial data. It is based on the assumption that a common factor exists that controls the joint dependence of all observations from the spatial process. As a result, our proposal can model tail dependence and tail asymmetry, unlike the Gaussian copula model. Moreover, we show that the new model can cover the full range of dependence between tail quadrant independence and tail dependence. Although the log‐likelihood of the model can be written in a simple form, we discuss the numerical issues that arise in computing it and ways to approximate it for drawing inference. Using the estimated copula model, the spatial process can be interpolated at locations where it is not observed. We apply the proposed model to temperature data over the western part of Switzerland and compare its performance with that of its stationary version and with the Gaussian copula model.
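A quick simulation of the one-factor copula mechanism, assuming a heavy-tailed (Student-t) common factor: all margins load on a single factor, and the heavy tail induces joint extremes that a Gaussian copula cannot reproduce. The loadings and degrees of freedom are arbitrary illustrative choices, not the paper's non-stationary spatial specification.

```python
# One-factor copula simulation: a shared heavy-tailed factor creates upper-tail
# dependence between margins, visible as joint exceedances of a high quantile.
import numpy as np

rng = np.random.default_rng(5)
n, d = 10_000, 4
load = np.array([0.8, 0.7, 0.6, 0.5])           # factor loadings (one per site)

V = rng.standard_t(3, size=n)                   # heavy-tailed common factor
eps = rng.normal(size=(n, d))
Z = load * V[:, None] + np.sqrt(1 - load**2) * eps   # latent field

U = np.empty_like(Z)                            # copula data on probability scale
for j in range(d):
    ranks = Z[:, j].argsort().argsort() + 1
    U[:, j] = ranks / (n + 1)                   # empirical CDF transform

q = 0.99                                        # crude upper-tail dependence check
both = np.mean((U[:, 0] > q) & (U[:, 1] > q))
print("P(both above 99th pct):", both, "vs independence:", round((1 - q) ** 2, 6))
```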
{"title":"A non‐stationary factor copula model for non‐Gaussian spatial data","authors":"Sagnik Mondal, Pavel Krupskii, Marc G. Genton","doi":"10.1002/sta4.715","DOIUrl":"https://doi.org/10.1002/sta4.715","url":null,"abstract":"We introduce a new copula model for non‐stationary replicated spatial data. It is based on the assumption that a common factor exists that controls the joint dependence of all the observations from the spatial process. As a result, our proposal can model tail dependence and tail asymmetry, unlike the Gaussian copula model. Moreover, we show that the new model can cover a full range of dependence between tail quadrant independence and tail dependence. Although the log‐likelihood of the model can be obtained in a simple form, we discuss its numerical computational issues and ways to approximate it for drawing inference. Using the estimated copula model, the spatial process can be interpolated at locations where it is not observed. We apply the proposed model to temperature data over the western part of Switzerland, and we compare its performance with that of its stationary version and with the Gaussian copula model.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"26 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141944781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empowering collaborative statisticians: The impact of the American Statistical Association's Pathways to Promotion Committee
Margaret R. Stedman, Salem Dehom, Mario A. Davidson, Li Zhang, Robert H. Podolsky, Ryan T. Pohlig, Todd Coffey
Members of the ASA's Section on Statistical Consulting established the Pathways to Promotion Committee in 2021 to provide guidance and support for navigating a career as a collaborative statistician. In its three years of existence, the Committee has produced seven webinars on relevant topics, each attended by more than one hundred participants. Committee members have given four oral presentations, organized three roundtables, led multiple discussions at ASA meetings and published four articles. These efforts have inspired, created and facilitated new connections for collaborative statisticians who feel isolated in their career paths. This paper describes the formation and development of the Committee, reports its impact on the community of collaborative statisticians and discusses potential future directions.
{"title":"Empowering collaborative statisticians: The impact of the American Statistical Association's Pathways to Promotion Committee","authors":"Margaret R. Stedman, Salem Dehom, Mario A. Davidson, Li Zhang, Robert H. Podolsky, Ryan T. Pohlig, Todd Coffey","doi":"10.1002/sta4.716","DOIUrl":"https://doi.org/10.1002/sta4.716","url":null,"abstract":"Members of the ASA's Section on Statistical Consulting established the Pathways to Promotion Committee in 2021 to provide guidance and support for navigating a career as a collaborative statistician. In three years of existence, the Committee has produced seven webinars on relevant topics, each attended by more than one hundred participants. Committee members have given four oral presentations, organized three roundtables, led multiple discussions at ASA meetings and published four articles. These efforts have inspired, created and facilitated new connections for collaborative statisticians who feel isolated in their career path. This paper describes the formation and development of the Committee, reports its impact on the community of collaborative statisticians and discusses potential future directions.","PeriodicalId":56159,"journal":{"name":"Stat","volume":"19 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141944703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}