Pub Date: 2021-04-20 | DOI: 10.1080/00224065.2021.1903822
Hao Yan, Nurrettin Dorukhan Sergin, William A. Brenneman, Steve J. Lange, Shan Ba
Abstract In multistage manufacturing systems (MMS), modeling multiple quality indices based on the process sensing variables is important. However, classic modeling techniques predict each quality variable one at a time, which fails to account for the correlation within or between stages. We propose a deep multistage multi-task learning framework that jointly predicts all output sensing variables in a unified end-to-end learning framework following the sequential system architecture of the MMS. Our numerical studies and real case study show that the new model outperforms many benchmark methods and offers strong interpretability through the developed variable selection techniques.
Title: "Deep multistage multi-task learning for quality prediction of multistage manufacturing systems." Journal of Quality Technology, pp. 526–544.
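The stage-sequential, joint-prediction idea can be illustrated with a deliberately simple linear analogue (this is not the authors' deep network; the two-stage setup and all names are invented for illustration): each stage's model consumes its own sensing variables plus the predicted quality outputs of the previous stage, so predictions propagate through the stages the way material does.

```python
import numpy as np

def fit_stagewise(stage_inputs, stage_outputs):
    """Fit one least-squares model per stage, feeding each stage's
    predicted quality outputs forward as extra inputs to the next stage."""
    models, preds = [], None
    for X, Y in zip(stage_inputs, stage_outputs):
        Z = X if preds is None else np.hstack([X, preds])
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # per-stage coefficients
        models.append(B)
        preds = Z @ B                              # propagate predictions downstream
    return models, preds

# Synthetic two-stage system: stage-2 quality depends on stage-1 quality.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 2))
Y1 = X1 @ rng.normal(size=(3, 2))
Y2 = X2 @ rng.normal(size=(2, 1)) + Y1 @ rng.normal(size=(2, 1))
models, final_pred = fit_stagewise([X1, X2], [Y1, Y2])
print(final_pred.shape)  # (50, 1)
```

A deep multi-task version would replace each least-squares fit with a neural module and train all stages jointly end to end, which is what allows within- and between-stage correlations to be exploited.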
Pub Date: 2021-04-14 | DOI: 10.1080/00224065.2021.1903823
R. Goedhart, W. Woodall
Abstract We propose a method for monitoring proportions when the in-control proportion and the sample sizes vary over time. Our approach overcomes some of the performance issues of other commonly used methods, as we demonstrate in this paper using analytical and numerical methods. The derivations and results are presented mainly for monitoring proportions, but we also show how the method can be extended to the monitoring of count data.
Title: "Monitoring proportions with two components of common cause variation." Journal of Quality Technology, pp. 324–337.
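For context, the standard baseline in this setting is a Shewhart p-chart with limits recomputed per period from the time-varying in-control proportion and sample size — the kind of method whose performance issues the paper addresses. A minimal sketch of that baseline (not the authors' proposed method):

```python
import math

def p_chart_signals(counts, sizes, p0, k=3.0):
    """Shewhart p-chart with time-varying in-control proportions p0[i] and
    sample sizes sizes[i]: flag any point beyond +/- k sigma of p0[i]."""
    signals = []
    for x, n, p in zip(counts, sizes, p0):
        sigma = math.sqrt(p * (1 - p) / n)  # normal approximation to binomial
        z = (x / n - p) / sigma
        signals.append(abs(z) > k)
    return signals

counts = [5, 30, 4]            # observed nonconforming units per period
sizes  = [100, 120, 90]        # varying sample sizes
p0     = [0.05, 0.06, 0.05]    # varying in-control proportions
print(p_chart_signals(counts, sizes, p0))  # [False, True, False]
```

The normal approximation underlying these limits is exactly where such charts struggle for small n*p, which motivates alternatives like the one proposed here.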
Pub Date: 2021-04-06 | DOI: 10.1080/00224065.2021.1903820
Miaomiao Yu, Chunjie Wu, F. Tsung
Abstract Dynamic data detection is one of the main concerns in the statistical process control (SPC) field. Here we focus on monitoring parametric multivariate dynamic data streams using the ARMAX-GARCH model, which reflects both the influence of exogenous variables on the mean vector and the heterogeneity of the covariance matrix. A quasi-maximum likelihood estimator is used to estimate the parameter vector of a dynamic process, and a top-r control scheme is proposed to monitor the parameters of multi-dimensional data streams. Finally, a real-data example of landslide monitoring illustrates the advantages of the proposed scheme.
Title: "Change detection in parametric multivariate dynamic data streams using the ARMAX-GARCH model." Journal of Quality Technology, pp. 303–323.
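The "top-r" idea aggregates a local monitoring statistic from each parameter stream into one global statistic by summing only the r largest values, so an alarm can be driven by a few shifted streams without being diluted by the many unshifted ones. A generic sketch (the paper's exact local statistics for the ARMAX-GARCH parameters may differ):

```python
def top_r_statistic(local_stats, r):
    """Global monitoring statistic: the sum of the r largest local
    statistics, one per monitored data stream."""
    return sum(sorted(local_stats, reverse=True)[:r])

def alarm(local_stats, r, threshold):
    """Signal a change when the top-r statistic exceeds the control limit."""
    return top_r_statistic(local_stats, r) > threshold

stats = [0.2, 3.1, 0.0, 2.4, 0.1]        # e.g., per-parameter CUSUM values
print(top_r_statistic(stats, r=2))        # 5.5
print(alarm(stats, r=2, threshold=5.0))   # True
```

The threshold would in practice be calibrated by simulation to achieve a target in-control average run length.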
Pub Date: 2021-04-01 | DOI: 10.1080/00224065.2021.1903824
Guanqi Fang
This book, written by Prentice and Zhao, brings advances to the specialized field of failure time data analysis. The existing literature contains an extensive body of statistical methods for univariate failure time analysis, including the Kaplan-Meier (KM) estimator, Cox regression, and censored-data rank tests. However, to the best of my knowledge, the effort devoted to multivariate failure time data analysis is insufficient. Multivariate failure time data arise when the failure times of individuals in a study cohort are dependent, which occurs in a number of situations, including epidemiologic studies and clinical trials. The development of statistical methods for multivariate data deserves more research attention. Although several books tackle the problem, they either devote the analysis to select types of multivariate data or emphasize a specific method. Compared with these works, this book summarizes the latest innovative research results both deeply and extensively.

Overall, the logic of the book is very clear. Chapter 1 gives an overview of the subsequent chapters. It covers a brief introduction to the models and tools and also provides some good application settings. Readers who are not familiar with the topic may read this chapter to quickly grasp the motivation of the study. Chapter 2 describes some core methods used to model univariate failure time data. It serves as a solid foundation for the extension to multivariate data analysis; therefore, readers should pay close attention to this chapter. Chapters 3 and 4 provide tools for analyzing bivariate failure time data from the nonparametric and regression perspectives, respectively. In Chapters 5 and 6, the aforementioned models and tools are extended to cover three or more failure time variates. Chapter 7 further considers the case of recurrent event data. Finally, the book concludes with Chapter 8, which discusses approaches to handling more general assumptions, such as dependent censorship and mismeasured covariate data.

As the title implies, the marginal modeling approach is the most important and distinctive feature of this book; it is described in detail in Sections 4.6, 5.4, and 6.5. Under this approach, a Cox-type model for the marginal double, triple, or higher-order failure hazard rates is used to explain the effects of time-dependent covariates. Several strengths distinguish it from the three conventional approaches: 1) the frailty approach, 2) the copula approach, and 3) counting-process intensity modeling. For example, the copula approach imposes a strong assumption on the dependencies among failure times and does not allow such dependencies to depend on covariates. In contrast, the marginal approach provides robustness through semiparametric estimation of the dependency. In short, the contributions of this book consist of
Title: "The Statistical Analysis of Multivariate Failure Time Data: A Marginal Modeling Approach." Journal of Quality Technology, pp. 359–360.
Pub Date: 2021-03-15 | DOI: 10.1080/00224065.2021.1902187
M. Testik, B. Colosimo
Title: "Next Editor of the Journal of Quality Technology: Dr. L. Allison Jones-Farmer." Journal of Quality Technology, p. 217.
Pub Date: 2021-03-11 | DOI: 10.1080/00224065.2021.1889417
P. Goos
Abstract In this article, I provide a detailed discussion of the well-known fish patty experiment, introduced into the literature by the late John A. Cornell in the first edition of his famous textbook on the design and analysis of mixture experiments. Cornell used the fish patty experiment as the motivating example for an article arguing that, for logistical reasons, many mixture-process-variable experiments are run using a split-plot experimental design. More specifically, he described two possible ways in which the fish patty experiment might have been performed, both of which require a split-plot analysis of the data. These descriptions, however, were not followed by the corresponding analyses of the fish patty data. Moreover, Cornell did not discuss the most convenient way in which the fish patty experiment could have been run, namely using a strip-plot design. In this article, I discuss the logistics leading to a strip-plot design, conduct the corresponding strip-plot analysis, and contrast it with the two split-plot analyses.
Title: "The fish patty experiment: a strip-plot look." Journal of Quality Technology, pp. 236–248.
Pub Date: 2021-03-05 | DOI: 10.1080/00224065.2021.1889420
Yifu Li, Xinwei Deng, Shan Ba, W. Myers, William A. Brenneman, Steve J. Lange, Ronald Zink, R. Jin
Abstract A manufacturing system collects big, heterogeneous data for tasks such as product quality modeling and data-driven decision-making. However, as the size of the data grows, timely and effective data utilization becomes challenging. We propose an unsupervised data filtering method to reduce manufacturing big data sets with multivariate continuous variables into informative small data sets. Furthermore, to determine the appropriate proportion of data to be filtered, we propose a filtering information criterion (FIC) to balance the tradeoff between the filtered data size and the information preserved. A case study from babycare manufacturing and a simulation study demonstrate the effectiveness of the proposed method.
Title: "Cluster-based data filtering for manufacturing big data systems." Journal of Quality Technology, pp. 290–302.
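A cluster-based filter of this kind can be sketched as: cluster the data, then retain only the most representative points of each cluster, proportionally per cluster. The sketch below uses plain k-means and a fixed retention fraction; it is an invented illustration, and choosing that fraction in a principled way is exactly what the paper's FIC addresses.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm) with random initial centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

def filter_by_cluster(X, k, keep_frac, seed=0):
    """Keep the keep_frac points nearest each cluster center, per cluster."""
    labels, centers = kmeans(X, k, seed=seed)
    keep = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue
        d = ((X[idx] - centers[j]) ** 2).sum(-1)     # distance to own center
        m = max(1, int(round(keep_frac * len(idx))))  # per-cluster quota
        keep.extend(idx[np.argsort(d)[:m]].tolist())
    return sorted(keep)

# Two well-separated synthetic "operating regimes" of 100 points each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 4)),
               rng.normal(5.0, 1.0, size=(100, 4))])
kept = filter_by_cluster(X, k=2, keep_frac=0.2)
print(len(kept))  # ~40 of 200 rows retained
```

Keeping the points nearest each center is one of several reasonable notions of "informative"; a real filter would weigh information preserved against the reduced data size.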
Pub Date: 2021-03-04 | DOI: 10.1080/00224065.2021.1889418
L. Deldossi, C. Tommasi
Abstract Big Data are huge amounts of digital information that rarely result from properly planned surveys; as a consequence, they often contain redundant observations. When the aim is to answer particular questions of interest, we suggest selecting a subsample of units that contains the majority of the information needed to achieve this goal. Selection methods driven by the theory of optimal design incorporate the inferential purposes and thus perform better than standard sampling schemes.
Title: "Optimal design subsampling from Big Datasets." Journal of Quality Technology, pp. 93–101.
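One common way to turn optimal-design theory into a subsampling rule is greedy D-optimality: for a linear model, repeatedly add the row that most increases det(X'X), since that determinant governs the volume of the parameter confidence region. A sketch under that assumption (a generic greedy scheme, not necessarily the authors' algorithm), using the rank-one identity det(M + xx') = det(M)(1 + x'M⁻¹x) and a Sherman-Morrison update:

```python
import numpy as np

def greedy_d_optimal(X, m, ridge=1e-6):
    """Greedily select m rows of X that approximately maximize det(X_S' X_S)."""
    n, p = X.shape
    Minv = np.linalg.inv(ridge * np.eye(p))  # inverse of regularized information matrix
    selected, remaining = [], set(range(n))
    for _ in range(m):
        best, best_gain = None, -np.inf
        for i in remaining:
            x = X[i]
            gain = x @ Minv @ x              # det multiplier is 1 + x' Minv x
            if gain > best_gain:
                best, best_gain = i, gain
        x = X[best][:, None]
        # Sherman-Morrison update of Minv after adding the chosen row
        Minv -= (Minv @ x @ x.T @ Minv) / (1.0 + best_gain)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))     # full "big" design matrix
idx = greedy_d_optimal(X, m=20)   # informative subsample of 20 rows
print(len(idx))  # 20
```

The inferential purpose enters through the model: a different model matrix (interactions, polynomial terms) would yield a different informative subsample.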
Pub Date: 2021-01-28 | DOI: 10.1080/00224065.2020.1865853
Chang-Yun Lin
Abstract In experimental designs, it is usually assumed that the data follow normal distributions and that the models have linear structures. In practice, experimenters may encounter different types of responses and be uncertain about model structures. If this is the case, traditional methods, such as ANOVA and regression, are not suitable for data analysis and model selection. We introduce random forest analysis, a powerful machine learning method capable of analyzing numerical and categorical data with complicated model structures. To perform model selection and factor identification with the random forest method, we propose a forward stepwise algorithm and develop Python and R codes based on minimizing the out-of-bag (OOB) error. Six examples, including simulation and case studies, are provided. We compare the performance of the proposed method with that of some frequently used analysis methods. Results show that forward stepwise random forest analysis, in general, has high power for identifying active factors and selects models with high prediction accuracy.
Title: "Forward stepwise random forest analysis for experimental designs." Journal of Quality Technology, pp. 488–504.
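The forward stepwise idea can be sketched with scikit-learn's random forest, which exposes an OOB score directly: greedily add the factor whose inclusion most improves OOB R², and stop when no candidate improves it. This is a simplified illustration of the general strategy, not the authors' published code, and the stopping rule and data are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forward_stepwise_rf(X, y, names, n_estimators=200, seed=0):
    """Greedily add the factor that most improves the out-of-bag (OOB) R^2
    of a random forest; stop when OOB R^2 no longer improves."""
    selected, best_score = [], -np.inf
    remaining = list(range(X.shape[1]))
    while remaining:
        scores = []
        for j in remaining:
            rf = RandomForestRegressor(n_estimators=n_estimators,
                                       oob_score=True, random_state=seed)
            rf.fit(X[:, selected + [j]], y)
            scores.append((rf.oob_score_, j))
        score, j = max(scores)
        if score <= best_score:      # no candidate improves the OOB score
            break
        best_score, selected = score, selected + [j]
        remaining.remove(j)
    return [names[j] for j in selected], best_score

# Five candidate factors; only A and C are active.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.5, size=300)
active, score = forward_stepwise_rf(X, y, names=list("ABCDE"))
print(sorted(active))  # includes 'A' and 'C'
```

Because the OOB error is computed from trees that did not see each observation, no separate validation split is needed during the stepwise search.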
Pub Date: 2021-01-22 | DOI: 10.1080/00224065.2020.1851618
Di Wang, Kaibo Liu, Xi Zhang
Abstract Thermal fields exist widely in engineering systems and are critical for engineering operation, product quality, and system safety in many industries. Accurate prediction of the thermal field distribution, that is, acquiring any location of interest in a thermal field at the present and future time, is essential to provide useful information for the surveillance, maintenance, and improvement of a system. However, thermal field prediction using data acquired from sensor networks is challenging due to data sparsity and missing-data problems. To address this issue, we propose a field spatiotemporal prediction approach based on transfer learning techniques, studying the dynamics of a 3D thermal field from multiple homogeneous fields. Our model characterizes the spatiotemporal dynamics of the local thermal field variations by considering the spatiotemporal correlation of the fields and harnessing the information from homogeneous fields to acquire an accurate future thermal field distribution. A real case study of thermal fields during grain storage validates the proposed approach. The grain thermal field prediction results provide deep insight into grain quality during storage, which helps grain storage managers make further decisions about grain quality control and maintenance.
Title: "A spatiotemporal prediction approach for a 3D thermal field from sensor networks." Journal of Quality Technology, pp. 215–235.
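To make the prediction task concrete, a naive spatiotemporal baseline (not the paper's transfer-learning model; all parameters here are invented) blends a linear temporal extrapolation at each sensor with a Gaussian-kernel spatial smooth of the latest frame, exploiting exactly the two correlation structures the abstract mentions:

```python
import numpy as np

def spatiotemporal_forecast(frames, coords, alpha=0.6, bandwidth=1.0):
    """One-step-ahead field forecast: AR(1)-style temporal extrapolation
    blended with a Gaussian-kernel spatial smooth of the latest frame."""
    last, prev = frames[-1], frames[-2]
    ar1 = last + alpha * (last - prev)               # temporal extrapolation
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * bandwidth ** 2))           # spatial kernel weights
    W /= W.sum(axis=1, keepdims=True)
    smooth = W @ last                                 # neighborhood average
    return 0.5 * ar1 + 0.5 * smooth

# Four sensors on a unit square, two consecutive temperature frames.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
frames = [np.array([20.0, 21.0, 19.0, 20.0]),
          np.array([21.0, 22.0, 20.0, 21.0])]
pred = spatiotemporal_forecast(frames, coords)
print(pred.shape)  # (4,)
```

The transfer-learning element of the paper would correspond to estimating such dynamics jointly across multiple homogeneous fields (e.g., several storage silos) rather than per field, which is what mitigates sparsity and missing data.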