Title: A Global-Local Approximation Framework for Large-Scale Gaussian Process Modeling
Authors: Akhil Vakayil, V. Roshan Joseph
Pub Date: 2023-12-18 | DOI: 10.1080/00401706.2023.2296451
Abstract: In this work, we propose a novel framework for large-scale Gaussian process (GP) modeling. Contrary to the global and local approximations proposed in the literature to address the computational b...
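The computational bottleneck that global and local approximations target is the exact GP solve, which scales cubically in the number of observations. A minimal sketch of exact GP prediction with a squared-exponential kernel makes this cost concrete; the kernel settings, data, and noise level below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Exact GP posterior mean/variance; the O(n^3) Cholesky factorization
    is the cost that large-scale GP approximations aim to avoid."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = rbf_kernel(X_test, X_test).diagonal() - np.sum(v**2, axis=0)
    return mean, var

X = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * X)
mu, var = gp_predict(X, y, np.array([0.25]))  # interpolates sin near 1.0
```

For n in the millions, the n-by-n factorization above is infeasible, which is what motivates approximation frameworks of the kind proposed here.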
Title: A graphical multi-fidelity Gaussian process model, with application to emulation of heavy-ion collisions
Authors: Yi Ji, Simon Mak, Derek Soeder, J-F Paquet, Steffen A. Bass
Pub Date: 2023-11-09 | DOI: 10.1080/00401706.2023.2281940
Abstract: With advances in scientific computing and mathematical modeling, complex scientific phenomena such as galaxy formations and rocket propulsion can now be reliably simulated. Such simulations can, however, be very time-intensive, requiring millions of CPU hours to perform. One solution is multi-fidelity emulation, which uses data of different fidelities to train an efficient predictive model that emulates the expensive simulator. For complex scientific problems and with careful elicitation from scientists, such multi-fidelity data may often be linked by a directed acyclic graph (DAG) representing its scientific model dependencies. We thus propose a new Graphical Multi-fidelity Gaussian Process (GMGP) model, which embeds this DAG structure (capturing scientific dependencies) within a Gaussian process framework. We show that the GMGP has desirable modeling traits via two Markov properties, and admits a scalable algorithm for recursive computation of the posterior mean and variance at each depth level of the DAG. We also present a novel experimental design methodology over the DAG given an experimental budget, and propose a nonlinear extension of the GMGP via deep Gaussian processes. The advantages of the GMGP are then demonstrated via a suite of numerical experiments and an application to emulation of heavy-ion collisions, which can be used to study the conditions of matter in the Universe shortly after the Big Bang. The proposed model has broader uses in data fusion applications with graphical structure, which we discuss further.
Keywords: Computer experiments; Gaussian processes; graphical models; nuclear physics; multi-fidelity modeling
Disclaimer: As a service to authors and researchers we are providing this version of an accepted manuscript (AM). Copyediting, typesetting, and review of the resulting proofs will be undertaken on this manuscript before final publication of the Version of Record (VoR). During production and pre-press, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal relate to these versions also.
Title: A Proportional Intensity Model with Frailty for Missing Recurrent Failure Data
Authors: Suk Joo Bae, Byeong Min Mun, Xiaoyan Zhu
Pub Date: 2023-11-02 | DOI: 10.1080/00401706.2023.2277711
Abstract: In some practical circumstances, data are recorded after the systems have begun operating, and data collection is stopped at a predetermined time or after a predetermined number of failures. In such circumstances, the data are incomplete in several respects: the number of failures and their occurrence times beyond the duration of the pilot study are missing. Additionally, multiple repairable systems may exhibit system-to-system variability caused by differences in the operating environments or working loads of individual systems. For left-truncated and right-censored recurrent failure data from multiple repairable systems, we propose a reliability model based on a proportional intensity model with frailty. The frailty model explicitly models unobserved heterogeneity among systems, while covariates incorporated into the proportional intensity model additionally account for the heterogeneity between different operating conditions. To estimate the model parameters from the left-truncated and right-censored recurrent failure data, a Monte Carlo expectation maximization (MCEM) algorithm is proposed. Details of the estimation of the model parameters and the construction of their confidence intervals are examined. A real-world example and simulation studies under various scenarios demonstrate the applicability of the proportional intensity model with frailty to left-truncated and right-censored data from multiple repairable systems for reliability prediction.
Index Terms: Monte Carlo expectation maximization (MCEM) algorithm; nonhomogeneous Poisson process; recurrent failure data; proportional intensity model; repairable system
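To make the model structure concrete, here is a hypothetical data-generating sketch of a proportional intensity model with gamma frailty over a power-law nonhomogeneous Poisson process baseline. All parameter values are assumed for illustration, and this is only the forward simulation, not the authors' MCEM estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sys, T, shape = 50, 10.0, 1.5
Lambda0 = T ** shape                  # cumulative power-law baseline: Λ0(T) = T^shape
x = rng.normal(size=n_sys)            # one observed covariate per system
beta = 0.3                            # covariate effect in the proportional term
z = rng.gamma(2.0, 0.5, size=n_sys)   # gamma frailty, mean 1: unobserved heterogeneity

# Failure counts on [0, T]: N_i ~ Poisson(z_i * exp(beta * x_i) * Λ0(T))
counts = rng.poisson(z * np.exp(beta * x) * Lambda0)

# Given N_1 failures, their times for a power-law NHPP are iid with CDF (t/T)^shape
times = np.sort(T * rng.uniform(size=counts[0]) ** (1.0 / shape))
```

Left truncation and right censoring would correspond to observing each system only on a sub-window of [0, T], which is what creates the missing-data problem the MCEM algorithm addresses.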
Title: Towards Improved Heliosphere Sky Map Estimation with Theseus
Authors: Dave Osthus, Brian P. Weaver, Lauren J. Beesley, Kelly R. Moran, Madeline A. Stricklin, Eric J. Zirnstein, Paul H. Janzen, Daniel B. Reisenfeld
Pub Date: 2023-10-24 | DOI: 10.1080/00401706.2023.2271017
Abstract: The Interstellar Boundary Explorer (IBEX) satellite has been in orbit since 2008 and detects energy-resolved energetic neutral atoms (ENAs) originating from the heliosphere. Different regions of the heliosphere generate ENAs at different rates. It is of scientific interest to take the data collected by IBEX and estimate spatial maps of heliospheric ENA rates (referred to as sky maps) at higher resolutions than before. These sky maps will subsequently be used to discern between competing theories of heliosphere properties in ways that are not currently possible. The data IBEX collects present challenges to sky map estimation. The two primary challenges are noisy and irregularly spaced data collection and the IBEX instrumentation's point spread function (PSF). In essence, the data collected by IBEX are both noisy and biased for the underlying sky map of inferential interest. In this paper, we present a two-stage sky map estimation procedure called Theseus. In Stage 1, Theseus estimates a blurred sky map from the noisy and irregularly spaced data using an ensemble approach that leverages projection pursuit regression and generalized additive models. In Stage 2, Theseus deblurs the sky map by deconvolving the PSF from the blurred map using regularization. Unblurred sky map uncertainties are computed via bootstrapping. We compare Theseus to a method closely related to the one operationally used today by the IBEX Science Operation Center (ISOC) on both simulated and real data. Theseus outperforms ISOC on nearly every considered metric on simulated data, indicating that Theseus is an improvement over the current state of the art.
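The Stage 2 idea of regularized deconvolution can be illustrated with a 1-D analogue: a hypothetical Gaussian PSF blurs a toy "sky map", and a Tikhonov (ridge) penalty stabilizes the inversion. Everything here (PSF width, noise level, penalty strength) is an assumption for illustration, not an IBEX specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
t = np.arange(n)

# Toy 1-D "sky map": two bumps of different widths
m_true = (np.exp(-0.5 * ((t - 30) / 4.0) ** 2)
          + 0.6 * np.exp(-0.5 * ((t - 65) / 6.0) ** 2))

# Hypothetical Gaussian PSF as a blur matrix with rows summing to 1
P = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
P /= P.sum(axis=1, keepdims=True)

b = P @ m_true + rng.normal(0, 1e-3, n)   # blurred, noisy observation

# Tikhonov (ridge) deconvolution: argmin_m ||P m - b||^2 + lam ||m||^2
lam = 1e-3
m_hat = np.linalg.solve(P.T @ P + lam * np.eye(n), P.T @ b)
```

Without the penalty, the near-singular blur matrix would amplify noise catastrophically; the ridge term trades a small amount of bias for a stable reconstruction, which is the role regularization plays in Stage 2.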
Title: Efficient Model-free Subsampling Method for Massive Data
Authors: Zheng Zhou, Zebin Yang, Aijun Zhang, Yongdao Zhou
Pub Date: 2023-10-18 | DOI: 10.1080/00401706.2023.2271091
Abstract: Subsampling plays a crucial role in tackling problems associated with the storage and statistical learning of massive datasets. However, most existing subsampling methods are model-based, which means their performance can drop significantly when the underlying model is misspecified. This issue calls for model-free subsampling methods that are robust under diverse model specifications. Several model-free subsampling methods have recently been developed, but their computing time grows explosively with the sample size, making them impractical for massive data. In this paper, an efficient model-free subsampling method is proposed, which segments the original data into regular data blocks and obtains subsamples from each data block by a data-driven subsampling method. Compared with existing model-free subsampling methods, the proposed method has a significant speed advantage and performs more robustly for datasets with complex underlying distributions. As demonstrated in simulation experiments, the proposed method is an order of magnitude faster than other commonly used model-free subsampling methods when the sample size of the original dataset reaches the order of 10^7. Moreover, simulation experiments and case studies show that the proposed method is more robust than other model-free subsampling methods under diverse model specifications and subsample sizes.
Keywords: Big data subsampling; Model robustness; Parallel computing; Uniform designs
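The block-then-subsample idea can be sketched as a simple stratified scheme: partition the input space into regular grid blocks, then draw a proportional subsample independently within each block. This is only a schematic analogue; the uniform grid, proportional allocation, and random within-block draws are assumptions standing in for the paper's data-driven method:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 2))        # stand-in for a massive 2-D dataset

def block_subsample(X, n_sub, blocks_per_dim=10):
    """Assign points to regular grid blocks, then draw a proportional
    share of the subsample independently within each occupied block."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.floor((X - lo) / (hi - lo + 1e-12) * blocks_per_dim).astype(int)
    keys = idx[:, 0] * blocks_per_dim + idx[:, 1]   # flatten 2-D block index
    chosen = []
    for k in np.unique(keys):
        members = np.flatnonzero(keys == k)
        take = max(1, round(n_sub * len(members) / len(X)))
        chosen.extend(rng.choice(members, size=min(take, len(members)),
                                 replace=False))
    return X[np.array(chosen)]

S = block_subsample(X, 1_000)
```

Because each block is processed independently, a scheme of this shape parallelizes naturally, which is one source of the speed advantage claimed for block-based subsampling.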
Title: Tensor-based Temporal Control for Partially Observed High-dimensional Streaming Data
Authors: Zihan Zhang, Shancong Mou, Kamran Paynabar, Jianjun Shi
Pub Date: 2023-10-16 | DOI: 10.1080/00401706.2023.2271060
Abstract: In advanced manufacturing processes, high-dimensional (HD) streaming data (e.g., sequential images or videos) are commonly used to provide online measurements of product quality. Although numerous research studies address monitoring and anomaly detection using HD streaming data, little research has been conducted on feedback control based on HD streaming data to improve product quality, especially in the presence of incomplete responses. To address this challenge, this paper proposes a novel tensor-based automatic control method for partially observed HD streaming data, which consists of two stages: offline modeling and online control. In the offline modeling stage, we propose a one-step approach integrating parameter estimation of the system model with missing value imputation for the response data. This approach (i) improves the accuracy of parameter estimation, and (ii) maintains stable and superior imputation performance over a wider range of ranks or missing ratios than existing data completion methods. In the online control stage, for each incoming sample, missing observations are imputed by balancing the sample's low-rank information against the one-step-ahead prediction based on the control action from the last time step. Then, the optimal control action is computed by minimizing a quadratic loss function on the sum of squared deviations from the target. Furthermore, we conduct two sets of simulations and one case study on semiconductor manufacturing to validate the superiority of the proposed framework.
Keywords: Streaming Data; High Dimension; Tensor; Feedback Control; Partial Observation
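Once missing observations are imputed, minimizing a quadratic loss on deviations from the target has a closed-form solution under a linear system model. A hypothetical sketch, where the gain matrix B, the dimensions, and the small ridge term are all assumptions for illustration rather than the paper's tensor formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 50, 3                         # response dimension, number of actuators
B = rng.normal(size=(p, q))          # assumed linear gain matrix (fit offline)
y = rng.normal(size=p)               # current (imputed) high-dimensional response
target = np.zeros(p)                 # quality target

# Choose u minimizing ||target - (y + B u)||^2; a tiny ridge term keeps
# the normal equations well-conditioned.
gamma = 1e-8
u = np.linalg.solve(B.T @ B + gamma * np.eye(q), B.T @ (target - y))
y_next = y + B @ u                   # predicted response after the control move
```

Since u = 0 is always feasible, the least-squares move can never predict a larger deviation from the target than taking no action, which is the basic guarantee behind quadratic-loss feedback control.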
Title: Post-Shrinkage Strategies in Statistical and Machine Learning for High Dimensional Data, Syed Ejaz Ahmed, Feryaal Ahmed, and Bahadir Yüzbaşı, New York: Chapman and Hall/CRC Press, 2023, 408 pp., ISBN 9780367763442
Authors: Abdulkadir Hussein
Pub Date: 2023-10-02 | DOI: 10.1080/00401706.2023.2262896
Title: Machine Learning for Knowledge Discovery with R: Methodologies for Modeling, Inference, and Prediction, Kao-Tai Tsai, Boca Raton, FL: CRC Press, Taylor & Francis Group, LLC, 2022, xiii + 260 pp., $88.00, ISBN: 978-1-032-06536-6 (H)
Authors: Aszani Aszani
Pub Date: 2023-10-02 | DOI: 10.1080/00401706.2023.2262891
Title: Computer Age Statistical Inference: Algorithms, Evidence, and Data Science, Student ed., Bradley Efron and Trevor Hastie, UK: Cambridge University Press, 2021, xix + 491 pp., $39.99 (pbk), ISBN 978-1-108-82341-8
Authors: Stan Lipovetsky
Pub Date: 2023-10-02 | DOI: 10.1080/00401706.2023.2262897
Title: A Criminologist's Guide to R: Crime by the Numbers, Jacob Kaplan, Boca Raton, FL: Chapman and Hall/CRC Press, Taylor & Francis Group, 2022, 432 pp., ISBN 9781032244075
Authors: Enrique Garcia-Ceja
Pub Date: 2023-10-02 | DOI: 10.1080/00401706.2023.2262895