
Latest Publications in The American Statistician

Here Comes the STRAIN: Analyzing Defensive Pass Rush in American Football with Player Tracking Data
Pub Date : 2023-05-17 DOI: 10.1080/00031305.2023.2242442
Quang Nguyen, Ronald Yurko, Gregory J. Matthews
In American football, a pass rush is an attempt by the defensive team to disrupt the offense and prevent the quarterback (QB) from completing a pass. Existing metrics for assessing pass rush performance are either discrete-time quantities or based on subjective judgment. Using player tracking data, we propose STRAIN, a novel metric for evaluating pass rushers in the National Football League (NFL) at the continuous-time within-play level. Inspired by the concept of strain rate in materials science, STRAIN is a simple and interpretable means for measuring defensive pressure in football. It is a directly observed statistic expressed as a function of two features: the distance between the pass rusher and QB, and the rate at which this distance is being reduced. Our metric is highly predictive of pressure and stable over time. We also fit a multilevel model for STRAIN to understand the defensive pressure contribution of every pass rusher at the play level. We apply our approach to NFL data and present results for the first eight weeks of the 2021 regular season. In particular, we provide comparisons of STRAIN for different defensive positions and play outcomes, and rankings of the NFL's best pass rushers according to our metric.
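The two features named in the abstract can be sketched numerically. Under one natural reading of the strain-rate analogy, STRAIN at each frame is the rate at which the rusher–QB distance shrinks divided by the current distance; the tracking coordinates below are made up for illustration, not the NFL data used in the paper.

```python
import numpy as np

# Hypothetical tracking data: (x, y) positions sampled at 10 Hz.
dt = 0.1
rusher = np.array([[10.0, 5.0], [10.8, 5.0], [11.6, 5.0], [12.4, 5.0]])
qb     = np.array([[15.0, 5.0], [15.0, 5.0], [15.1, 5.0], [15.1, 5.0]])

d = np.linalg.norm(rusher - qb, axis=1)  # rusher-QB distance per frame
d_dot = np.gradient(d, dt)               # rate of change of that distance
strain = -d_dot / d                      # positive while closing in on the QB
```

In this toy play the rusher closes steadily, so the computed values grow frame by frame, matching the intuition that pressure builds as the distance shrinks faster relative to how close the rusher already is.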
Citations: 0
Response to Comment by Schilling
Pub Date : 2023-04-20 DOI: 10.1080/00031305.2023.2205455
J. Bartroff, G. Lorden, Lijia Wang
We appreciate the recent paper of Schilling and Stanley (2022, hereafter SS) on confidence intervals for the hypergeometric being brought to our attention, which we were not aware of while preparing our paper (Bartroff, Lorden, and Wang 2022, hereafter BLW) on that subject. Although there are commonalities between the two approaches, there are some important distinctions that we highlight here. Following those papers’ notations, below we denote the confidence intervals for the hypergeometric success parameter based on sample size n and population size N by LCO for SS, and C∗ for BLW. In the numerical examples below, LCO (github.com/mfschilling/ HGCIs) and C∗ (github.com/bartroff792/hyper) were computed using the respective authors’ publicly available R code, running on the same computer. Computational time. LCO and C∗ differ drastically in the amount of time required to compute them. Figure 1 shows the computational time of LCO and C∗ for α = 0.05, N = 200, 400, . . . , 1000, and n = N/2. For example, for N = 1000 the computational time of LCO exceeds 100 min whereas C∗ requires roughly 1/10th of a second (0.002 min). In further numerical comparisons not included here, we found this relationship to be common for moderate to large values of the sample and population sizes, n and N. This may be due to the algorithm for computing LCO which calls for searching among all acceptance functions of minimal span (SS, p. 37). Provable optimality. SS contains two proofs, one in the Appendix of a basic result about the hypergeometric parameters, and one in the main text of the paper’s only theorem (SS, p. 
33) which is a general result that size-optimal hypergeometric acceptance sets are inverted to yield size-optimal confidence “intervals.” However, not all inverted acceptance sets will yield proper intervals, and in practice one often ends up with noninterval confidence sets, for example, intervals with “gaps.” This occurs when the endpoint sequences of the acceptance intervals being inverted are non-monotonic, or themselves have gaps. SS address this by modifying their proposal in this situation to mimic a method of Schilling and Doi (2014) developed for the Binomial distribution. SS (pp. 36–37) write, Where there is a need to resolve a gap, in which case the minimal span acceptance function that causes the gap is replaced with the one having the
Citations: 0
Hierarchical Spatio-Temporal Change-Point Detection
Pub Date : 2023-04-11 DOI: 10.1080/00031305.2023.2191670
Detecting change-points in multivariate settings is usually carried out by analyzing all marginals either independently, via univariate methods, or jointly, through multivariate approaches. The former discards any inherent dependencies between different marginals and the latter may suffer from domination/masking among different change-points of distinct marginals. As a remedy, we propose an approach which groups marginals with similar temporal behaviors, and then performs group-wise multivariate change-point detection. Our approach groups marginals based on hierarchical clustering using distances which adjust for inherent dependencies. Through a simulation study we show that our approach, by preventing domination/masking, significantly enhances the general performance of the employed multivariate change-point detection method. Finally, we apply our approach to two datasets: (i) Land Surface Temperature in Spain, during the years 2000–2021, and (ii) The WikiLeaks Afghan War Diary data.
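The pipeline in this abstract — group marginals by temporal similarity, then detect change-points group-wise — can be sketched with stand-ins: a correlation-distance threshold replaces the authors' dependence-adjusted hierarchical clustering, and a univariate CUSUM scan on each group mean replaces their multivariate detector. All data below are simulated.

```python
import numpy as np

def correlation_distance(X):
    # X: (T, p) array; one column per marginal series.
    return 1 - np.corrcoef(X, rowvar=False)

def threshold_groups(D, eps=0.5):
    # Merge marginals whose pairwise distance is below eps into groups
    # (connected components -- a toy stand-in for hierarchical clustering).
    p = D.shape[0]
    labels = list(range(p))
    for i in range(p):
        for j in range(i + 1, p):
            if D[i, j] < eps:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels

def cusum_changepoint(x):
    # Index maximizing the CUSUM statistic for a single mean shift.
    s = np.cumsum(x - x.mean())
    return int(np.argmax(np.abs(s[:-1]))) + 1

# Simulated data: three marginals share a level shift at t = 60, two are noise.
rng = np.random.default_rng(0)
T = 120
signal = np.concatenate([np.zeros(60), 2 * np.ones(60)])
X = np.column_stack(
    [signal + rng.normal(0, 0.3, T) for _ in range(3)]
    + [rng.normal(0, 0.3, T) for _ in range(2)]
)

labels = threshold_groups(correlation_distance(X))
group1 = [j for j in range(5) if labels[j] == labels[0]]
cp = cusum_changepoint(X[:, group1].mean(axis=1))
```

Running the detector on the group mean, rather than on all five marginals jointly, is what keeps the noise-only series from masking the shared change — the benefit the abstract attributes to grouping.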
Citations: 4
Comment on “A Case for Nonparametrics” by Bower et al.
Pub Date : 2023-04-03 DOI: 10.1080/00031305.2023.2172078
K. Rice, T. Lumley
While we welcome Bower et al.’s (2022) exploration of how teaching need not rely on parametric methods alone, we wish to raise some issues with their presentation. First, “nonparametric”—meaning methods that do not rely on an assumed form of frequency distribution—is not synonymous with “rank-based.” When, as in Bower et al.’s (2022) examples, interest lies in differences in mean or median between groups, we could nonparametrically use permutation tests with test statistics that—straightforwardly—describe differences in sample mean or median between groups. These exact approaches use no assumptions other than independence of the outcomes (Berry, Johnston, and Mielke 2019, sec. 3.3) and that the test statistic being used captures deviations from the null that are of some scientific interest. So there is no need to switch to less-relevant rank-based methods, much less present them as the natural alternative to being parametric. We do agree with Bower et al. (2022) that user-friendly implementations are important, but permutation tests are available via simple R commands (e.g., the coin package’s oneway_test() function (Hothorn et al. 2008)) and Shiny applications built on them. Second, the Kruskal-Wallis and Wilcoxon tests are not tests for population mean rank in the same sense that ANOVA and the t-test are tests for the mean, or Mood’s test is a test for the median. The issue is that the population mean and median for a subgroup are defined by the distribution of the response in just that subgroup. The mean rank, in contrast, depends on the distribution of responses in all subgroups and their sample sizes, so whether group 1 has higher mean rank than group 2 can depend on which other groups are also in the dataset. When the groups are not stochastically ordered, this leads to surprisingly complicated behavior of the Kruskal-Wallis test (Brown and Hettmansperger 2002).
Finally, with regard to pedagogy we recommend the Data Problem-Solving Cycle (Wild and Pfannkuch 1999) in which the first step, “Problem,” identifies the question to address. This
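The permutation test the comment describes is easy to implement outside R as well; here is a minimal Python sketch using the difference in sample means as the test statistic (the comment's R route is the coin package's oneway_test()). The data vectors are made up.

```python
import random

def perm_test_mean_diff(x, y, n_perm=5000, seed=1):
    # Two-sided permutation test with a difference-in-means statistic.
    # Exchangeability under the null requires only independence of outcomes.
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        gx, gy = pooled[:len(x)], pooled[len(x):]
        if abs(sum(gx) / len(gx) - sum(gy) / len(gy)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction keeps p > 0

p_far = perm_test_mean_diff([5.1, 5.5, 5.3, 5.7, 5.2], [1.0, 1.2, 0.9, 1.1, 1.3])
p_near = perm_test_mean_diff([1.0, 2.0, 3.0, 4.0, 5.0], [1.5, 2.5, 3.5, 2.0, 3.0])
```

Because the statistic is the mean difference itself, the procedure directly targets the quantity of scientific interest, which is the comment's point about not needing to fall back on rank-based methods.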
Citations: 0
A Response to Rice and Lumley
Pub Date : 2023-04-03 DOI: 10.1080/00031305.2023.2182362
Roy Bower, William Cipolli
We recognize the careful reading of and thought-provoking commentary on our work by Rice and Lumley. Further, we appreciate the opportunity to respond and clarify our position regarding the three presented concerns. We address these points in three sections below and conclude with final remarks in Section 4.
Citations: 0
Mapping life expectancy loss in Barcelona in 2020
Pub Date : 2023-04-03 DOI: 10.1080/00031305.2023.2197022
X. Puig, J. Ginebra
We use a Bayesian spatio-temporal model, first to smooth small-area initial life expectancy estimates in Barcelona for 2020, and second to predict what small-area life expectancy would have been in 2020 in absence of covid-19 using mortality data from 2007 to 2019. This allows us to estimate and map the small-area life expectancy loss, which can be used to assess how the impact of covid-19 varies spatially, and to explore whether that loss relates to underlying factors, such as population density, educational level, or proportion of older individuals living alone. We find that the small-area life expectancy losses for men and for women have similar distributions, are spatially uncorrelated, but are positively correlated with population density and with each other. On average, we estimate that the life expectancy loss in Barcelona in 2020 was 2.01 years for men, falling back to 2011 levels, and 2.11 years for women, falling back to 2006 levels.
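The "life expectancy loss" in this abstract is an expected-minus-observed difference in period life expectancy. A minimal period life-table sketch of that arithmetic follows; the flat mortality schedule and the 25% excess are made up for illustration, and none of the authors' Bayesian spatio-temporal machinery is implemented.

```python
import numpy as np

def period_life_expectancy(m):
    # m: hypothetical age-specific mortality rates for ages 0, 1, ..., omega.
    q = m / (1 + 0.5 * m)            # annual death probability (mid-year approx.)
    q = np.append(q[:-1], 1.0)       # close the table at the last age
    l = np.concatenate([[1.0], np.cumprod(1 - q)[:-1]])  # survivors at exact age x
    L = l * (1 - 0.5 * q)            # person-years lived in [x, x+1)
    return L.sum()

# Expected minus observed, mirroring the abstract's notion of "loss"
# (schedules below are made up, not the Barcelona data).
baseline = np.full(100, 0.01)
pandemic = 1.25 * baseline           # hypothetical 25% excess mortality
loss = period_life_expectancy(baseline) - period_life_expectancy(pandemic)
```

Even a uniform excess in mortality rates translates into a loss of several years of period life expectancy, which is the scale of effect the abstract reports for Barcelona.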
Citations: 0
Handbook of Multiple Comparisons
Pub Date : 2023-04-03 DOI: 10.1080/00031305.2023.2198355
Junyong Park
Finite population sampling has found numerous applications in the past century. Valid inference about real populations is possible based on known sampling probabilities, “irrespectively of the unknown properties of the target population studied” (Neyman, 1934). Graphs additionally allow one to incorporate the connections among the population units. Many socio-economic, biological, spatial, or technological phenomena exhibit an underlying graph structure that may be the central interest of study, or the edges may effectively provide access to those nodes that are the primary targets. Either way, graph sampling provides a universally valid approach to studying real-valued graphs. This book establishes a rigorous conceptual framework for graph sampling and gives a unified presentation of much of the existing theory and methods, including several of the most recent developments. The most central concepts are introduced in Chapter 1, such as graph totals and parameters as targets of estimation, observation procedures following an initial sample of nodes that drive graph sampling, the sample graph in which different kinds of induced subgraphs (such as edge, triangle, 4-cycle, K-star) can be observed, and graph sampling strategies consisting of a sampling method and an associated estimator. Chapters 2–4 introduce strategies based on bipartite graph sampling and the incidence weighting estimator, which encompass all the existing unconventional finite population sampling methods, including indirect, network, adaptive cluster, or line intercept sampling. This can help to raise awareness of these methods, allowing them to be more effectively studied and applied as cases of graph sampling.
For instance, Chapter 4 considers how to apply adaptive network sampling in a situation like the covid outbreak, which allows one to combat the virus spread by test-trace and to estimate the prevalence at the same time, provided the necessary elements of probability design and observation procedure are implemented. Chapters 5 and 6 deal with snowball sampling and targeted random walk sampling, respectively, which can be regarded as probabilistic breadth-first or depth-first non-exhaustive search methods in graphs. Novel approaches to sampling strategies are developed and illustrated, such as how to account for the fact that an observed triangle could have been observed in many other ways that remain hidden from the realized sample graph, or how to estimate a parameter related to certain finite-order subgraphs (such as a triangle) based on a random walk in the graph. The Bibliographic Notes at the end of each chapter contain some reflections on sources of inspiration, motivations for chosen approaches, and topics for future development.
I found the contents of the book highly innovative and useful. The indirect sampling of Lavallée (2007) can be viewed as a special case of graph sampling, and the material on adaptive cluster sampling should be very useful in many real-world sampling problems. Some of the material has not been published elsewhere. Modern survey sampling research topics, such as respondent-driven sampling or reinforcement learning, can be viewed as graph sampling problems; in this sense, graph sampling may be the future of sampling. However, the exposition is somewhat terse, and more examples and context would help readers grasp the concepts. Moreover, the design-based framework assumes that the conditional inclusion probabilities are known in advance; it would be nice if the author could also cover cases where these inclusion probabilities are estimated rather than known. A chapter on practical applications would likewise help readers understand the material. I hope a second edition of the book includes such content. In any case, much remains to be explored in this area, and this book can serve as an excellent guide for a journey into graph sampling. I intend to use this book as a reference for my advanced survey sampling course at Iowa State University.
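To make the snowball-sampling observation procedure mentioned in the review concrete, here is a toy one-wave snowball sample over a hypothetical graph; none of the book's estimators are implemented, only the sampling step itself.

```python
import random

def one_wave_snowball(adj, p0=0.5, seed=7):
    # adj: dict node -> set of neighbors (undirected, made-up graph).
    # Take an initial Bernoulli(p0) sample of nodes, then observe every
    # neighbor of a sampled node together with the incident edges.
    rng = random.Random(seed)
    s0 = {v for v in sorted(adj) if rng.random() < p0}
    s1 = s0 | {u for v in s0 for u in adj[v]}
    edges = {frozenset((v, u)) for v in s0 for u in adj[v]}
    return s0, s1, edges

# A small hypothetical graph: a triangle 1-2-3 with a tail 3-4-5.
toy = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
s0, s1, edges = one_wave_snowball(toy)
```

The observed sample graph (s1 with the listed edges) is driven entirely by the initial node sample, which is what makes design-based inference with known inclusion probabilities possible in this setting.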
Graph Sampling
Pub Date : 2023-04-03 DOI: 10.1080/00031305.2023.2198354
Jae-Kwang Kim
{"title":"Graph Sampling","authors":"Jae-Kwang Kim","doi":"10.1080/00031305.2023.2198354","DOIUrl":"https://doi.org/10.1080/00031305.2023.2198354","url":null,"abstract":"","PeriodicalId":342642,"journal":{"name":"The American Statistician","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122329326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Bartroff, J., Lorden, G. and Wang, L. (2022), "Optimal and Fast Confidence Intervals for Hypergeometric Successes," The American Statistician: Comment by Schilling
Pub Date : 2023-03-30 DOI: 10.1080/00031305.2023.2197021
M. Schilling
The article “Optimal and Fast Confidence Intervals for Hypergeometric Successes” by Bartroff, J., Lorden, G. and Wang, L. (BLW) develops a procedure for interval estimation of the number of successes M in a finite population based on constructing minimal length symmetrical acceptance intervals, which are inverted to determine confidence intervals based on the number of successes x obtained from a sample of size n. The authors compare their procedure to previously developed methods derived from the method of pivoting (Buonaccorsi 1987; Konijn 1973; Casella and Berger 2002, chap. 9) as well as to the more recent work of Wang (2015), and show that their approach generally leads to substantially shorter confidence intervals than those of these competitors, while frequently achieving higher coverage. However, the present authors BLW were evidently unaware of my recent paper with A. Stanley, “A New Approach to Precise Interval Estimation for the Parameters of the Hypergeometric Distribution” (Schilling and Stanley 2020), which solved the problem of constructing a hypergeometric confidence procedure that has minimal length (that is, minimal total cardinality of the confidence intervals for x = 0 to n), while maximizing coverage among all length minimizing procedures. We also compared our method to the same competitors as those listed above, as well as to one that can be obtained from Blaker’s (2000) method, and we showed the superiority in performance of our procedure. The two goals of our paper—length minimization and maximal coverage—are the same as those in BLW’s paper, and BLW’s approach matches rather closely with ours. The authors’ “α optimal” is our “minimal cardinality,” while our “maximal coverage” is BLW’s “PM-maximizing.” The only substantive difference between the two confidence procedures is that BLW’s specifies symmetrical acceptance sets, while ours does not. This affects only a small number of confidence intervals. 
An investigation of all 95% confidence intervals for each population size N between 5 and 100 and sample sizes n = 5, 10, . . . with n ≤ N finds that BLW’s confidence intervals are identical to ours in 99.43% of the 34,200 intervals checked. When they are different, the BLW
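The two evaluation criteria at issue in this exchange, total interval cardinality and exact coverage, can both be computed by brute-force enumeration over the hypergeometric distribution. The sketch below is illustrative only; the function names are mine, and the trivial full-range procedure stands in for an actual confidence procedure such as BLW's or Schilling-Stanley's:

```python
from math import comb

def hypergeom_pmf(x, N, M, n):
    """P(X = x): x successes in a size-n sample from N units, M of them successes."""
    return comb(M, x) * comb(N - M, n - x) / comb(N, n)

def coverage_and_cardinality(intervals, N, n):
    """Exact coverage of a procedure {x: (lo, hi)} at every true M,
    plus the total cardinality sum_x (hi - lo + 1)."""
    cover = []
    for M in range(N + 1):
        p = sum(hypergeom_pmf(x, N, M, n)
                for x in range(max(0, n - (N - M)), min(n, M) + 1)
                if intervals[x][0] <= M <= intervals[x][1])
        cover.append(p)
    card = sum(hi - lo + 1 for lo, hi in intervals.values())
    return cover, card

# Sanity check with the trivial procedure [0, N] for every x:
# coverage is 1 for all M, and cardinality is (n + 1) * (N + 1).
N, n = 10, 4
full = {x: (0, N) for x in range(n + 1)}
cover, card = coverage_and_cardinality(full, N, n)
print(min(cover), card)
```

Running this over all x for two competing procedures is how one would reproduce comparisons like the 99.43% agreement figure cited above.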
{"title":"Bartroff, J., Lorden, G. and Wang, L. (2022), “Optimal and Fast Confidence Intervals for Hypergeometric Successes,” The American Statistician: Comment by Schilling","authors":"M. Schilling","doi":"10.1080/00031305.2023.2197021","DOIUrl":"https://doi.org/10.1080/00031305.2023.2197021","url":null,"abstract":"The article “Optimal and Fast Confidence Intervals for Hypergeometric Successes” by Bartroff, J., Lorden, G. and Wang, L. (BLW) develops a procedure for interval estimation of the number of successes M in a finite population based on constructing minimal length symmetrical acceptance intervals, which are inverted to determine confidence intervals based on the number of successes x obtained from a sample of size n. The authors compare their procedure to previously developed methods derived from the method of pivoting (Buonaccorsi 1987; Konijn 1973; Casella and Berger 2002, chap. 9) as well as to the more recent work of Wang (2015), and show that their approach generally leads to substantially shorter confidence intervals than those of these competitors, while frequently achieving higher coverage. However, the present authors BLW were evidently unaware of my recent paper with A. Stanley, “A New Approach to Precise Interval Estimation for the Parameters of the Hypergeometric Distribution” (Schilling and Stanley 2020), which solved the problem of constructing a hypergeometric confidence procedure that has minimal length (that is, minimal total cardinality of the confidence intervals for x = 0 to n), while maximizing coverage among all length minimizing procedures. We also compared our method to the same competitors as those listed above, as well as to one that can be obtained from Blaker’s (2000) method, and we showed the superiority in performance of our procedure. The two goals of our paper—length minimization and maximal coverage—are the same as those in BLW’s paper, and BLW’s approach matches rather closely with ours. 
The authors’ “α optimal” is our “minimal cardinality,” while our “maximal coverage” is BLW’s “PM-maximizing.” The only substantive difference between the two confidence procedures is that BLW’s specifies symmetrical acceptance sets, while ours does not. This affects only a small number of confidence intervals. An investigation of all 95% confidence intervals for each population size N between 5 and 100 and sample sizes n = 5, 10, . . . with n ≤ N finds that BLW’s confidence intervals are identical to ours in 99.43% of the 34,200 intervals checked. When they are different, the BLW","PeriodicalId":342642,"journal":{"name":"The American Statistician","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132865500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Correction: Linearity of Unbiased Linear Model Estimators
Pub Date : 2023-03-27 DOI: 10.1080/00031305.2023.2184423
The author presented a proof that a regression estimator unbiased for all distributions in a sufficiently broad family F0 must be linear. The family was taken to consist of the convolutions of all two-point distributions with a scale-family of smooth densities tending to a point mass at zero. The basic calculations were based solely on the discrete distributions, but the convolutions were introduced so that the family would be a subset of some standard, smooth nonparametric families. For example, adaptive estimation requires enough smoothness so that the Cramér-Rao bound provides optimal asymptotics. Some further comments appear in the supplemental material. The proof required that convergence of the smooth densities to zero implied that the expectations under the smooth convolutions converged to the expectation under the two-point distribution. This requires that the estimate be continuous, and there was a major error in the proof that unbiasedness implies continuity. It appears that continuity cannot be proved using unbiasedness over F0, but it can be proved using unbiasedness over families of discrete distributions (see supplemental material). Thus, either the estimator must be assumed to be continuous, or a result using only discrete distributions is required. In trying to correct the error, a much simpler proof of the linearity result was found. This proof takes F0 to consist only of discrete distributions. The details are also presented in the supplemental material, but the basic idea is relatively simple: Consider a simplex with center at zero. Each point in the simplex (say −y) is a convex combination of the vertices of the simplex; that is, −y is the expectation for a (discrete) distribution putting probability pi on vertex zi. Thus, the distribution putting probability 1/2 on y and pi/2 on zi has mean zero.
Therefore, by unbiasedness, T(y) is a convex combination of the {T(zi)}; that is, T(y) is the matrix whose columns are the T(zi), times a vector, p, of probabilities. By basic properties of a simplex, p is an affine function of −y; and so T(y) is an affine function of y. By unbiasedness under a point mass at zero, T(0) = 0; and so the affine constant is zero and T must be linear.
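The key step in this argument, that the probability vector p is an affine function of y, can be checked numerically: p solves Z p = −y together with the sum-to-one constraint, and the solution map of that linear system is affine in y. A small sketch under an assumed setup (a triangle centered at the origin in R^2; the function name is mine):

```python
import numpy as np

# Vertices z_i of a simplex in R^2 with center at the origin (columns sum to 0)
Z = np.array([[1.0, -0.5, -0.5],
              [0.0,  0.8, -0.8]])

def barycentric(y):
    """Weights p with Z @ p = -y and sum(p) = 1, as in the argument above."""
    A = np.vstack([Z, np.ones((1, 3))])   # append the sum-to-one constraint
    b = np.concatenate([-y, [1.0]])
    return np.linalg.solve(A, b)

# Affinity check: p(y1 + y2) + p(0) should equal p(y1) + p(y2)
y1, y2 = np.array([0.1, 0.05]), np.array([-0.05, 0.1])
lhs = barycentric(y1 + y2) + barycentric(np.zeros(2))
rhs = barycentric(y1) + barycentric(y2)
print(np.allclose(lhs, rhs))  # True
```

Since p is affine in y and T(y) is a fixed matrix times p, T is affine; T(0) = 0 then forces linearity, exactly as the correction states.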
{"title":"Correction: Linearity of Unbiased Linear Model Estimators","authors":"","doi":"10.1080/00031305.2023.2184423","DOIUrl":"https://doi.org/10.1080/00031305.2023.2184423","url":null,"abstract":"The author presented a proof that a regression estimator unbiased for all distributions in a sufficiently broad family F0 must be linear. The family was taken to consist of the convolutions of all two-point distributions with a scale-family of smooth densities tending to a point mass at zero. The basic calculations were based solely on the discrete distributions, but the convolutions were introduced so that the family would be a subset of some standard, smooth nonparametric families. For example, adaptive estimation requires enough smoothness so that the Cramér-Rao bound provides optimal asymptotics. Some further comments appear in the supplemental material. The proof required that convergence of the smooth densities to zero implied that the expectations under the smooth convolutions converged to the expectation under the two-point distribution. This requires that the estimate be continuous, and there was a major error in the proof that unbiasedness implies continuity. It appears that continuity cannot be proved using unbiasedness over F0 , but it can be proved using unbiasedness over families of discrete distributions (see supplemental material). Thus, either the estimator must be assumed to be continuous, or a result using only discrete distributions is required. In trying to correct the error, a much simpler proof of the linearity result was found. This proof takes F0 to consist only of discrete distributions. The details are also presented in the supplemental material, but the basic idea is relatively simple: Consider a simplex with center at zero. Each point in the simplex (say −y ) is a convex combination of the vertices of the simplex; that is, −y is the expectation for a (discrete) distribution putting probability pi on vertex zi . 
Thus, the distribution putting probability 2 on y and 1 2 pi on zi has mean zero. Therefore, by unbiasedness, T(y) is a convex combination of the {T(zi)} ; that is, T(y) is the matrix whose columns are T(zi) times a vector, p , of probabilities. By basic properties of a simplex, p is an affine function of −y ; and so T(y) is an affine function of y . By unbiasedness under a point mass at zero, T(0) = 0 ; and so the affine constant is zero and T must be linear.","PeriodicalId":342642,"journal":{"name":"The American Statistician","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123041885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0