Pub Date: 2023-06-21. DOI: 10.1080/00224065.2023.2210320
Yan He, Yicheng Kang, F. Tsung, D. Xiang
Abstract Modern manufacturing systems are often equipped with sensor networks that generate high-dimensional data at high velocity. These data streams offer valuable information about the industrial system’s real-time performance. If a shift occurs in the manufacturing process, fault diagnosis based on the data streams becomes a fundamental task, as it identifies the affected data streams and provides insight into the root cause. Existing fault diagnostic methods either ignore the correlation between different streams or fail to determine the shift directions. In this paper, we propose a directional fault classification procedure that incorporates the between-stream correlations. We suggest a three-state hidden Markov model that captures the correlation structure and enables inference about the shift direction. We show that our procedure is optimal in the sense that it minimizes the expected number of false discoveries while controlling the proportion of missed signals at a desired level. We also propose a deconvolution-expectation-maximization (DEM) algorithm for estimating the model parameters and establish the asymptotic optimality of the data-driven version of our procedure. Numerical comparisons with an existing approach and an application to a semiconductor production study show that the proposed procedure works well in practice.
Title: Directional fault classification for correlated high-dimensional data streams using hidden Markov models (Journal of Quality Technology)
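The three-state hidden Markov model described in the abstract can be illustrated with a small sketch: one state each for a downward shift, in control, and an upward shift, Gaussian emissions, and forward-backward posteriors used to classify each stream directionally. This is a generic scaled forward-backward implementation under invented parameters (the transition matrix, emission means, and toy data are assumptions for illustration), not the authors' DEM-estimated model:

```python
import numpy as np

def hmm_posteriors(x, pi0, A, means, sd):
    """Forward-backward posteriors for a 3-state Gaussian-emission HMM.

    States: 0 = downward shift, 1 = in control, 2 = upward shift.
    x     : observed statistics for the p ordered data streams, shape (p,)
    pi0   : initial state distribution, shape (3,)
    A     : state transition matrix, shape (3, 3)
    means : emission means per state, shape (3,)
    sd    : common emission standard deviation
    """
    p = len(x)
    # Gaussian emission densities, shape (p, 3)
    dens = np.exp(-0.5 * ((x[:, None] - means[None, :]) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    alpha = np.zeros((p, 3))   # scaled forward probabilities
    beta = np.zeros((p, 3))    # scaled backward probabilities
    c = np.zeros(p)            # per-step scaling constants
    alpha[0] = pi0 * dens[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, p):
        alpha[t] = (alpha[t - 1] @ A) * dens[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(p - 2, -1, -1):
        beta[t] = (A @ (dens[t + 1] * beta[t + 1])) / c[t + 1]
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Toy example: streams 3-5 carry an upward shift.
x = np.array([0.1, -0.2, 3.1, 2.8, 3.3, 0.0, -0.1])
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
post = hmm_posteriors(x, np.array([0.1, 0.8, 0.1]), A, np.array([-3.0, 0.0, 3.0]), 1.0)
labels = post.argmax(axis=1)  # 0: down-shift, 1: in control, 2: up-shift
```

Classifying by the maximum posterior is what lets the procedure report not just which streams shifted but in which direction.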
Pub Date: 2023-06-21. DOI: 10.1080/00224065.2023.2217363
A. V. Quevedo, G. Vining
Abstract In parametric non-linear profile modeling, it is crucial to map the impact of the model parameters to a single metric. In the profile monitoring literature, a common approach is to monitor the stability of the parameters simultaneously with a multivariate T² statistic. However, this approach focuses only on the estimated parameters of the non-linear model and treats them as separate but correlated quality characteristics of the process; consequently, it does not take full advantage of the model structure. To address this limitation, we propose a procedure for monitoring profiles based on a non-linear mixed model that accounts for the proper variance-covariance structure. Our proposed method is based on externally studentized residuals, used to test whether a given profile deviates significantly from the other profiles in the non-linear mixed model. The results show that our control chart is effective and appears to perform better than the T² chart.
Title: A non-linear mixed model approach for detecting outlying profiles
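The key idea of an externally studentized residual, leaving the profile under test out when estimating the reference mean and spread, can be sketched in a simplified univariate form. Here each profile is reduced to a single hypothetical summary score; the paper works with a full non-linear mixed model, which this sketch does not implement:

```python
import numpy as np

def externally_studentized(scores):
    """Externally studentized residual of each profile score.

    For each profile i, the mean and standard deviation are recomputed
    with profile i left out, so an outlying profile cannot inflate its
    own reference estimates and mask itself.
    """
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    t = np.empty(n)
    for i in range(n):
        rest = np.delete(scores, i)
        m, s = rest.mean(), rest.std(ddof=1)
        # Studentize against the leave-one-out mean and spread.
        t[i] = (scores[i] - m) / (s * np.sqrt(1.0 + 1.0 / (n - 1)))
    return t

scores = np.array([1.02, 0.97, 1.05, 0.99, 1.01, 2.40])  # last profile is outlying
t = externally_studentized(scores)
```

Because the suspect profile is excluded from its own reference, its statistic is dramatically larger than those of the conforming profiles.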
Pub Date: 2023-05-27. DOI: 10.1080/00224065.2022.2097967
Hao Zhao
R is one of the most common programming languages for statistical modeling and data analysis tasks. Learning Base R (2nd edition) by Lawrence M. Leemis provides an accessible approach to the R language for beginners with little to no programming exposure. Unlike other introductory R books, Learning Base R is more in-depth from a statistical perspective, giving a fundamental overview of the language. Chapter 1 provides a comprehensive introduction to R, as well as some tricks to make an R session more efficient. The following two chapters introduce basic arithmetic operations. Three elementary data structures, vectors, matrices, and arrays, are introduced in Chapters 4 to 6. Chapters 7 and 8 describe built-in and user-written functions, and Chapter 9 introduces some useful utilities. Notably, some new functions have been added to these chapters in this edition, such as assign, append, and attributes. The next three chapters introduce three other types of elements that can be stored in data structures: complex numbers, character strings, and logical elements. Chapters 13 and 14 introduce methods for comparing elements with relational operators and coercing elements to specific data types; a new table summarizing the "is family" of functions is also provided here. Two more advanced data structures, lists and data frames, are introduced in the next two chapters. Chapter 17 presents some of R's built-in data sets. Chapter 18 concerns input/output, including a more sophisticated application of scan. After these essential topics, some advanced topics are illustrated in the following chapters. Chapter 19 focuses on functions associated with the probability distributions of random variables.
Chapters 20 and 21 explain, in very fine detail, how to generate high-level graphics and custom graphics, and Chapters 22 to 24 introduce many of R's programming capabilities. Chapter 25 explains Monte Carlo simulation. Furthermore, Chapters 26 to 28 contain the most changes compared with the previous edition. Chapter 26 gives brief introductions to statistical inference methods, including univariate data analysis, analysis of variance, regression, and time series analysis. Chapter 27 introduces linear algebra functions. Chapter 28 covers some popular packages for data visualization and data analysis, such as ggplot2, lubridate, and lpSolve, with other packages appearing in the exercises section. There are over 400 exercises in total, an increase of 265 new exercises (an average of 9–10 new exercises per chapter) from the previous edition, to enhance the reader's knowledge of R. The book also includes plenty of instructional videos and code for readers to explore, available on the author's website. In conclusion, this book covers the R programming language and all its details in a practical way. For those who are just starting to learn R, the book and its accompanying examples and exercises will give you enough knowledge to begin using R for a variety of data analysis tasks, and there are additional tips and tricks to help you write clean and optimized code. Whether you are keen to learn R from scratch or simply want a refresher, I highly recommend this book.
Title: Learning Base R, 2nd edition, Lawrence M. Leemis, 2022, Lightning Source, 368 pp., $40, ISBN: 978-0-9829174-5-9
Pub Date: 2023-05-08. DOI: 10.1080/00224065.2023.2196455
B. Jones, R. Lekivetz, C. Nachtsheim
Abstract There is limited literature on screening experiments in which some factors have three levels and others have two. This topic has seen renewed interest of late following the introduction of the definitive screening design structure by Jones and Nachtsheim (2011) and Xiao et al. (2012). Two well-known examples are Taguchi’s L18 and L36 designs. However, these designs are limited in two ways. First, they allow only 18 or 36 runs, which is restrictive. Second, they provide no protection against bias of the main effects due to active two-factor interactions. In this article, we introduce a family of orthogonal, mixed-level screening designs in multiples of eight runs. Our 16-run design can accommodate up to four continuous three-level factors and up to eight two-level factors. The three-level factors must be continuous, whereas the two-level factors can be either continuous or categorical. All of our designs supply substantial protection of the main-effects estimates against bias due to active two-factor interactions.
Title: A family of orthogonal main effects screening designs for mixed-level factors
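The main-effects orthogonality property claimed for such designs is easy to check numerically: center each factor column and verify that all cross products vanish. The 12-run mixed-level full factorial below is only a stand-in for illustration, not the 16-run design from the article:

```python
import itertools
import numpy as np

# Full factorial in one three-level factor (coded -1, 0, 1) and two
# two-level factors (coded -1, 1): 3 * 2 * 2 = 12 runs.
design = np.array(list(itertools.product([-1, 0, 1], [-1, 1], [-1, 1])), dtype=float)

def main_effects_orthogonal(D):
    """True if all centered main-effect columns have zero cross products."""
    X = D - D.mean(axis=0)        # center each factor column
    G = X.T @ X                   # cross-product (Gram) matrix
    off = G - np.diag(np.diag(G)) # off-diagonal entries must vanish
    return np.allclose(off, 0.0)

ok = main_effects_orthogonal(design)
```

Orthogonality of the centered columns is exactly what lets each main effect be estimated independently of the others.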
Pub Date: 2023-05-03. DOI: 10.1080/00224065.2023.2196035
José Núñez Ares, P. Goos
Abstract The family of orthogonal minimally aliased response surface (OMARS) designs comprises traditional response surface designs, such as central composite designs and Box-Behnken designs, as well as definitive screening designs. Key features of OMARS designs are that they are orthogonal for the main effects and that the main effects are not aliased with any two-factor interaction effect or any quadratic effect. In this article, we present a method for arranging the runs of an OMARS design in blocks of equal size, so that the main effects can be estimated independently of the blocks and the interaction and quadratic effects are confounded as little as possible with the blocks. We show that our new method for blocking OMARS designs offers much flexibility in choosing the number of runs, the number of blocks, and the block sizes, and that it often outperforms the blocking arrangements of definitive screening designs available in the literature and in commercial software.
Title: Blocking OMARS designs and definitive screening designs
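The trade-off behind blocking, keeping the main effects orthogonal to blocks while sacrificing as little as possible of the higher-order effects, can be made concrete on a classical two-level example: a 2³ factorial split into two blocks on the ABC interaction. This is not an OMARS design; it is just the simplest setting in which the confounding computation can be shown:

```python
import itertools
import numpy as np

# A 2^3 full factorial split into two blocks of four runs, using the
# three-factor interaction ABC as the blocking contrast -- a classical
# two-level arrangement chosen only to make the confounding idea concrete.
D = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
block = D[:, 0] * D[:, 1] * D[:, 2]   # block labels: +1 or -1

def confounding(col, blk):
    """Absolute correlation between an effect column and the block contrast."""
    c = col - col.mean()
    b = blk - blk.mean()
    return abs(c @ b) / (np.linalg.norm(c) * np.linalg.norm(b))

mains = [confounding(D[:, j], block) for j in range(3)]
ints = [confounding(D[:, j] * D[:, k], block)
        for j, k in [(0, 1), (0, 2), (1, 2)]]
triple = confounding(D[:, 0] * D[:, 1] * D[:, 2], block)
# Main effects and two-factor interactions are clear of blocks (0.0);
# only the ABC interaction is sacrificed (fully confounded, 1.0).
```

Blocking an OMARS design pursues the same goal with mixed-level runs: zero confounding for main effects, minimal confounding for interaction and quadratic effects.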
Pub Date: 2023-04-18. DOI: 10.1080/00224065.2023.2192883
Lenny Rahmawati
Title: Statistics for Chemical and Process Engineers: A Modern Approach
Pub Date: 2023-04-13. DOI: 10.1080/00224065.2023.2192884
Willis A. Jensen
Title: The Reliability of Generating Data
Pub Date: 2023-04-05. DOI: 10.1080/00224065.2023.2192882
L. Leemis
Title: Foundations of Statistics for Data Scientists: With R and Python
Pub Date: 2023-04-05. DOI: 10.1080/00224065.2023.2185558
E. V. Thomas
Abstract Sensitivity testing often involves sequential design strategies in small-sample settings that provide binary data, which are then used to develop generalized linear models. Model parameters are usually estimated via maximum likelihood, and confidence bounds relating to model parameters and quantiles are often based on the likelihood ratio. In this paper, it is demonstrated how the bias-corrected parametric bootstrap, used in conjunction with approximate pivotal quantities, provides an alternative means of constructing bounds under a location-scale model. In small-sample settings, the coverage of bounds based on the likelihood ratio is often anticonservative due to bias in estimating the scale parameter. In contrast, bounds produced by the bias-corrected parametric bootstrap can provide accurate levels of coverage in such settings when both the sequential strategy and the method of parameter estimation effectively adapt (are approximately equivariant) to the location and scale. A series of simulations illustrates this contrasting behavior in a small-sample setting under a normal/probit model in conjunction with a popular sequential design strategy.
In addition, it is shown how a high-fidelity assessment of performance can be attained with reduced computational effort by using the nonparametric bootstrap to resample pivotal quantities obtained from a small-scale set of parametric bootstrap simulations.
Title: Use of the bias-corrected parametric bootstrap in sensitivity testing/analysis to construct confidence bounds with accurate levels of coverage
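The bias-corrected parametric bootstrap itself is straightforward to sketch for a scale parameter: resample from the fitted model, estimate the bias of the estimator, and subtract it. The normal model and sample below are illustrative assumptions; the paper's procedure additionally involves approximate pivotal quantities and sequential binary-response designs, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_mle(x):
    """Maximum-likelihood estimate of the normal scale (divides by n)."""
    return np.sqrt(np.mean((x - x.mean()) ** 2))

# Small sample: the MLE of sigma is biased downward.
x = rng.normal(loc=10.0, scale=2.0, size=8)
s_hat = sigma_mle(x)

# Parametric bootstrap: resample from the fitted model, re-estimate sigma.
B = 4000
boot = np.array([sigma_mle(rng.normal(x.mean(), s_hat, size=x.size))
                 for _ in range(B)])

# Additive bias correction: estimated bias is mean(boot) - s_hat,
# so the corrected estimate is s_hat - bias = 2 * s_hat - mean(boot).
s_bc = 2.0 * s_hat - boot.mean()
```

The correction pushes the scale estimate upward, which is precisely what counteracts the anticonservative coverage of likelihood-ratio bounds in small samples.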
Pub Date: 2023-04-05. DOI: 10.1080/00224065.2023.2186288
Chun-Yen Wang, Dennis K. J. Lin
Abstract In an order-of-addition (OofA) experiment, the response is a function of the order in which components are added. The key objective of OofA experiments is to find the optimal order of addition. Perhaps the most widely used model for OofA experiments is the pairwise ordering (PWO) model, which assumes that the response can be fully accounted for by the pairwise ordering of components. Recently, the PWO model has been extended by adding interactions of PWO factors, defined as products of two PWO factors, to account for variation caused by the ordering of sets of three or more components. This paper introduces a novel class of conditional PWO effects to study the interaction effects between PWO factors. The advantages of the proposed interaction terms are studied. Based on these conditional effects, a new model is proposed, from which the optimal order of addition can be obtained straightforwardly.
Title: Interaction effects in pairwise ordering model
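The PWO model is concrete enough to sketch: each pair (j, k) contributes a factor z_jk that is +1 if component j is added before component k and -1 otherwise, and a fitted model can be maximized by enumerating all m! orders. The coefficients below are hypothetical, and this is the main-effects PWO model, not the conditional-interaction extension proposed in the paper:

```python
import itertools
import numpy as np

def pwo_row(order):
    """PWO factors for one addition order of m components.

    z_{jk} = +1 if component j is added before component k, else -1,
    taken over all pairs j < k.
    """
    pos = {c: i for i, c in enumerate(order)}
    return [1 if pos[j] < pos[k] else -1
            for j, k in itertools.combinations(range(len(order)), 2)]

# Hypothetical fitted PWO model for m = 3 components: an intercept plus
# one coefficient per PWO factor, for the pairs (0,1), (0,2), (1,2).
beta0, beta = 10.0, np.array([2.0, -1.0, 0.5])

def predict(order):
    return beta0 + np.array(pwo_row(order)) @ beta

# Enumerate all m! addition orders and pick the one maximizing the response.
orders = list(itertools.permutations(range(3)))
best = max(orders, key=predict)  # optimal order of addition under this model
```

Exhaustive enumeration is feasible only for small m; the appeal of a fitted PWO-type model is that it predicts the response of every order from a designed subset of runs.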