Pub Date: 2016-04-06 | DOI: 10.1080/0740817X.2016.1163443
C. Alexopoulos, D. Goldsman, Peng Tang, James R. Wilson
Abstract: This article presents SPSTS, an automated sequential procedure for computing point and confidence-interval (CI) estimators for the steady-state mean of a simulation-generated process, subject to user-specified requirements for the CI coverage probability and relative half-length. SPSTS is the first sequential method based on Standardized Time Series (STS) area estimators of the steady-state variance parameter (i.e., the sum of covariances at all lags). Whereas its leading competitors rely on the method of batch means to remove bias due to the initial transient, estimate the variance parameter, and compute the CI, SPSTS relies on the signed areas corresponding to two orthonormal STS area variance estimators for these tasks. In successive stages of SPSTS, standard tests for normality and independence are applied to the signed areas to determine (i) the length of the warm-up period and (ii) a batch size sufficient to ensure adequate convergence of the associated STS area variance estimators to their limiting chi-squared distributions. SPSTS's performance is compared experimentally with that of recent batch-means methods on selected test problems of varying degrees of difficulty. SPSTS performed comparatively well in terms of its average required sample size as well as the coverage and average half-length of the final CIs.
"SPSTS: A sequential procedure for estimating the steady-state mean using standardized time series." IIE Transactions 48(1): 864–880.
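The signed-area construction at the heart of SPSTS can be sketched numerically. The function below computes the signed area of a batch's standardized time series with the constant weight sqrt(12), the simplest member of the family; this is an illustrative sketch only, since SPSTS itself uses two orthonormal weight functions and layers normality and independence testing on top:

```python
import numpy as np

def sts_signed_area(y, weight=lambda t: np.full_like(t, np.sqrt(12.0))):
    """Signed area of the standardized time series (STS) of the batch y.
    With the constant weight sqrt(12), the signed area is asymptotically
    N(0, sigma^2), so its square estimates the variance parameter (the sum
    of covariances at all lags).  Sketch only: SPSTS uses two orthonormal
    weight functions and adds normality/independence tests on top."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    k = np.arange(1, n + 1)
    running_mean = np.cumsum(y) / k
    sts = k * (y.mean() - running_mean) / np.sqrt(n)   # sigma * T_n(k/n)
    return float(np.mean(weight(k / n) * sts))
```

Squaring the signed area gives one area variance estimator; averaging such squares over batches is what drives the CI half-length in procedures of this kind.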
Pub Date: 2016-04-02 | DOI: 10.1080/0740817X.2015.1122253
Erik T. S. Bjarnason, S. Taghipour
Abstract: We investigate the maintenance and inventory policy for a k-out-of-n system in which the components' failures are hidden and follow a non-homogeneous Poisson process. Two types of inspections are performed to find failed components: planned periodic inspections and unplanned opportunistic inspections. The latter are performed at system failure times, when n − k + 1 components are simultaneously down. In all cases, the failed components are either minimally repaired or replaced with spare parts from the inventory. The inventory is replenished either periodically or when the system fails. The periodic orders have a random lead time, but there is no lead time for emergency orders, as these are placed at system failure times. The key objective is to develop a method to solve the joint maintenance and inventory problem for systems with a large number of components, a long planning horizon, and a large inventory. We construct a simulation model to jointly optimize the periodic inspection interval, the periodic reorder interval, and the periodic and emergency order-up-to levels. Because of the large search space, it is infeasible to try all possible combinations of the decision variables in a reasonable amount of time. Thus, the simulation model is integrated with a heuristic search algorithm to obtain the optimal solution.
"Periodic inspection frequency and inventory policies for a k-out-of-n system." IIE Transactions 48(1): 638–650.
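The hidden-failure dynamics can be sketched with a tiny discrete-event simulation. The version below models only the maintenance side (power-law NHPP failures with minimal repair, periodic inspections, and an opportunistic inspection at system failure); the inventory, lead times, and order-up-to levels of the paper are omitted, and every parameter value is an illustrative assumption:

```python
import random

def cost_rate(n=6, k=4, tau=10.0, horizon=2000.0, beta=2.0, eta=500.0,
              c_insp=5.0, c_rep=50.0, c_down=200.0, seed=0):
    """Monte-Carlo sketch of the hidden-failure k-out-of-n model: power-law
    NHPP component failures (minimal repair), periodic inspections every tau,
    and an opportunistic inspection at system failure, i.e., when n - k + 1
    components are simultaneously down.  All numbers are illustrative."""
    rng = random.Random(seed)

    def next_failure(t):
        # invert the power-law cumulative intensity (t/eta)**beta
        return eta * ((t / eta) ** beta + rng.expovariate(1.0)) ** (1.0 / beta)

    fail = [next_failure(0.0) for _ in range(n)]   # hidden failure times
    t, cost, next_insp = 0.0, 0.0, tau
    while t < horizon:
        down = sorted(f for f in fail if f <= next_insp)
        if len(down) >= n - k + 1:        # system fails before the inspection
            t = down[n - k]               # time of the (n-k+1)-th failure
            cost += c_down
        else:                             # ordinary periodic inspection
            t = next_insp
            next_insp += tau
            cost += c_insp
        for i in range(n):                # repair every component found down
            if fail[i] <= t:
                cost += c_rep
                fail[i] = next_failure(t)
    return cost / t
```

A joint optimization of the kind the article describes would wrap a search heuristic around such a simulator, varying the inspection interval, reorder interval, and order-up-to levels together.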
Pub Date: 2016-03-30 | DOI: 10.1080/0740817X.2015.1122254
K. Bastani, Prahalada K. Rao, Z. Kong
Abstract: The objective of this work is to realize real-time monitoring of process conditions in advanced manufacturing using multiple heterogeneous sensor signals. To achieve this objective, we propose an approach that invokes the concept of sparse estimation, called online sparse estimation-based classification (OSEC). The novelty of the OSEC approach lies in representing data from sensor signals as an underdetermined linear system of equations and subsequently solving it using a newly developed greedy Bayesian estimation method. We apply the OSEC approach to two advanced manufacturing scenarios, namely, a fused filament fabrication additive manufacturing process and an ultraprecision semiconductor chemical–mechanical planarization process. Using the proposed OSEC approach, process drifts are detected and classified with higher accuracy than with popular machine learning techniques: fidelity approached 90% (F-score) with OSEC, whereas conventional signal analysis techniques (e.g., neural networks, support vector machines, quadratic discriminant analysis, naïve Bayes) achieved F-scores in the range of 40% to 70%.
"An online sparse estimation-based classification approach for real-time monitoring in advanced manufacturing processes from heterogeneous sensor data." IIE Transactions 48(1): 579–598.
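Sparse recovery from an underdetermined system is the computational core here. Below is orthogonal matching pursuit, a standard greedy solver, used purely as a stand-in: the article's contribution is a newly developed greedy Bayesian estimator, which this sketch does not reproduce:

```python
import numpy as np

def omp(A, y, max_atoms=10, tol=1e-8):
    """Orthogonal matching pursuit for the underdetermined system y = A x:
    greedily pick the column most correlated with the residual, then
    least-squares refit on the picked columns.  A standard stand-in for the
    article's greedy Bayesian estimator, not that method."""
    m, p = A.shape
    idx, x = [], np.zeros(0)
    residual = y.astype(float).copy()
    for _ in range(max_atoms):
        if np.linalg.norm(residual) < tol:
            break
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j in idx:
            break
        idx.append(j)
        x, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ x
    coef = np.zeros(p)
    coef[idx] = x
    return coef
```

In a classification setting of this kind, the dictionary columns are grouped by class and a new signal is assigned to the class whose columns best explain it (smallest residual).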
Pub Date: 2016-03-30 | DOI: 10.1080/0740817X.2016.1167288
Kunlei Lian, Ashlea Bennett Milburn, R. Rardin
Abstract: This article presents a multi-objective variant of the Consistent Vehicle Routing Problem (MoConVRP). Instead of modeling consistency considerations, such as driver consistency and time consistency, as constraints, as in the majority of the ConVRP literature, they are included as objectives. Furthermore, instead of formulating a single weighted objective that relies on specifying relative priorities among the objectives, an approach to approximate the Pareto frontier is developed. Specifically, an improved version of multi-directional local search (MDLS) is developed. The updated algorithm, IMDLS, uses large neighborhood search to find solutions, improved according to at least one objective, to add to the set of nondominated solutions at each iteration. The performance of IMDLS is compared with MDLS and five other multi-objective algorithms on a set of ConVRP test instances from the literature. The computational study validates the competitive performance of IMDLS. Furthermore, its results suggest that pursuing the best compromise solution among all three objectives may increase travel costs by about 5% while improving driver and time consistency by approximately 60% and over 75% on average, compared with a compromise solution having the lowest overall travel distance. Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc.
"An improved multi-directional local search algorithm for the multi-objective consistent vehicle routing problem." IIE Transactions 48(1): 975–992.
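The nondominated-set bookkeeping that IMDLS performs after each large-neighborhood move can be sketched directly; the objective vectors below, (travel cost, driver inconsistency, time inconsistency), are an illustrative choice rather than the paper's exact encoding:

```python
def dominates(a, b):
    """True when a Pareto-dominates b: every objective no worse and at
    least one strictly better (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert a candidate objective vector into the nondominated archive:
    reject it if dominated, otherwise add it and evict anything it
    dominates.  Sketch of the archive update step only."""
    if candidate in archive or any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

Each accepted move improves at least one objective, so the archive can only move toward the Pareto frontier as iterations accumulate.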
Pub Date: 2016-03-30 | DOI: 10.1080/0740817X.2015.1110651
T. Bilgiç, Refik Güllü
Abstract: We analyze the competitive investment behavior of heterogeneous firms in innovative products or services under revenue and technology uncertainty. Firms decide how much to invest in research and development of an innovative technology at the beginning of the time horizon. They discover the technology at an uncertain time in the future. The time of successful discovery depends on the amount of investment and the characteristics of the firms. All firms collect revenues even though they are not winners. Although there can be positive or negative external shocks, the potential revenue rates decrease over time, and the first firm to adopt the technology is less prone to negative shocks and benefits more from positive shocks. Therefore, the competition is a stochastic race in which all firms collect some revenue once they adopt. We show the existence of a pure-strategy Nash equilibrium for this game in a duopoly market under general assumptions and provide more structural results when the time to successful innovation is exponentially distributed. We show the uniqueness of the equilibrium for an arbitrary number of symmetric firms. We argue that, for sufficiently efficient firms that are resilient against market shocks, consolidating racing firms will decrease their expected profits. We also provide an illustrative computational analysis of comparative statics, in which we show examples of non-monotonic behavior of the equilibrium investment levels. It appears that equilibrium investment behavior in innovation can depend strongly on firm characteristics.
"Innovation race under revenue and technology uncertainty of heterogeneous firms where the winner does not take all." IIE Transactions 48(1): 527–540.
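The "winner does not take all" structure can be illustrated with a deliberately crude toy: exponential discovery times with rate proportional to investment, a fixed prize split between first and second discoverer, and linear investment cost. This payoff model and every number in it are illustrative assumptions, not the article's model, which has time-varying revenue rates and external shocks:

```python
import numpy as np

def equilibrium(R_win=100.0, R_lose=40.0, cost=1.0, x0=10.0):
    """Toy symmetric duopoly race: each firm's discovery time is exponential
    with rate proportional to its investment, the first discoverer earns
    R_win, the other still earns R_lose, and investment costs `cost` per
    unit.  Finds a symmetric equilibrium by best-response iteration."""
    grid = np.linspace(0.01, 50.0, 5000)

    def payoff(x, rival):
        p_win = x / (x + rival)          # P(discover first); rival > 0
        return p_win * R_win + (1.0 - p_win) * R_lose - cost * x

    x = x0
    for _ in range(100):                 # best-response iteration
        x = float(grid[np.argmax(payoff(grid, x))])
    return x
```

For these numbers the symmetric equilibrium solves the first-order condition in closed form, x* = (R_win − R_lose) / (4 · cost) = 15, and the iteration converges to it; shrinking R_win − R_lose (a race that matters less) drives equilibrium investment down.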
Pub Date: 2016-03-24 | DOI: 10.1080/0740817X.2016.1167287
Ruiwei Jiang, Yongpei Guan, J. Watson
Abstract: Owing to its sustainability and to government stimulus plans, renewable energy (such as wind and solar) has been increasingly used in power systems. However, the intermittency of renewable energy creates challenges for power system operators in keeping the systems reliable and cost-effective. In addition, information about renewable energy is usually incomplete. Instead of knowing the true probability distribution of the renewable energy source, one can only collect a set of historical data samples from the true (but ambiguous) distribution. In this article, we study two risk-averse stochastic unit commitment models with incomplete information: the first is a chance-constrained unit commitment model and the second a two-stage stochastic unit commitment model with recourse. Based on historical data on renewable energy, we construct a confidence set for the probability distribution of the renewable energy and propose data-driven stochastic unit commitment models to hedge against the incomplete nature of the information. Our models also ensure that, with high probability, a large portion of the renewable energy is utilized. Furthermore, we develop solution approaches for the models based on deriving strong valid inequalities and Benders' decomposition algorithms. We show that the risk-averse behavior of both models decreases as more data samples are collected and eventually vanishes as the sample size goes to infinity. Finally, our case studies verify the effectiveness of the proposed models and solution approaches.
"Risk-averse stochastic unit commitment with incomplete information." IIE Transactions 48(1): 838–854.
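The data-driven flavor of a chance constraint is easy to illustrate in miniature: given historical wind samples, find the smallest conventional commitment that covers demand in at least a 1 − ε fraction of the samples. This sample-average sketch is a stand-in only; the article's models are distributionally robust over a confidence set and carry the full unit-commitment constraints:

```python
import numpy as np

def min_commitment(demand, wind_samples, eps=0.05):
    """Smallest conventional generation g such that g + wind >= demand in at
    least a (1 - eps) fraction of the historical wind samples: the
    sample-average version of a chance constraint.  Illustrative stand-in
    for the article's distributionally robust formulation."""
    shortfall = np.sort(demand - np.asarray(wind_samples, dtype=float))
    idx = int(np.ceil((1.0 - eps) * len(shortfall))) - 1
    return float(shortfall[idx])
```

As the number of samples grows, this empirical quantile converges to the true one, mirroring the article's result that the extra conservatism induced by limited data vanishes as the sample size goes to infinity.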
Pub Date: 2016-03-24 | DOI: 10.1080/0740817X.2016.1163444
M. A. Bajestani, D. Banjevic
Abstract: In this article, we introduce an age-based replacement policy in which preventive replacements are restricted to specific calendar times. Under the new policy, an asset is renewed at failure or at a given calendar time if its age is then greater than or equal to the replacement age, whichever occurs first. This policy is logistically applicable in industries such as utilities, where there are large and geographically diverse populations of deteriorating assets with different installation times. Since preventive replacements are performed at fixed times, the renewal cycles are dependent random variables, so the classic renewal reward theorem cannot be applied directly. Using the theory of Markov chains with general state space and a suitably defined ergodic measure, we analyze the problem to find the optimal replacement age that minimizes the long-run expected cost per time unit. We further find the limiting distributions of the backward and forward recurrence times for this policy and show how our ergodic measure can be used to analyze more complicated policies. Finally, using a real data set of utility wood poles' maintenance records, we numerically illustrate some of our results, including the importance of defining an appropriate ergodic measure in reducing the computational expense.
"Calendar-based age replacement policy with dependent renewal cycles." IIE Transactions 48(1): 1016–1026.
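The classic, unrestricted age replacement policy is the baseline this article generalizes, and its cost rate does follow from the renewal reward theorem. The sketch below evaluates that textbook formula for Weibull lifetimes (all parameter values are illustrative); the article's calendar-time restriction makes the cycles dependent, so this formula is exactly what stops working there:

```python
import numpy as np

def age_replacement_cost(T, cp=1.0, cf=10.0, beta=2.5, eta=100.0, npts=2000):
    """Long-run cost rate g(T) of the classic unrestricted age replacement
    policy with Weibull(beta, eta) lifetimes, by the renewal reward theorem:
    g(T) = (cp*R(T) + cf*F(T)) / E[min(X, T)], where R = 1 - F is the
    survival function and E[min(X, T)] = integral of R over [0, T]."""
    t = np.linspace(0.0, T, npts)
    R = np.exp(-((t / eta) ** beta))            # Weibull survival function
    mean_cycle = float(np.sum(R[1:] + R[:-1]) * (t[1] - t[0]) / 2.0)
    return float((cp * R[-1] + cf * (1.0 - R[-1])) / mean_cycle)

# grid search for the cost-optimal replacement age
Ts = np.linspace(5.0, 300.0, 300)
g = [age_replacement_cost(T) for T in Ts]
T_star = float(Ts[int(np.argmin(g))])
```

With an increasing hazard (beta > 1) and failure replacement much costlier than preventive replacement, the optimal age T_star is finite and beats run-to-failure, which is what makes the policy worth restricting to convenient calendar times in the first place.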
Pub Date: 2016-03-24 | DOI: 10.1080/0740817X.2016.1167286
Hongyue Sun, Xinwei Deng, Kaibo Wang, R. Jin
Abstract: Single-crystal silicon ingots are produced by a complex crystal growth process. Such a process is sensitive to subtle changes in process conditions, which can easily cause it to fail and produce a polycrystalline ingot instead of the desired monocrystalline ingot. Therefore, it is important to model this polycrystalline defect in the crystal growth process and identify key process variables and their features. However, modeling the crystal growth process poses great challenges due to complicated engineering mechanisms and a large number of functional process variables. In this article, we focus on modeling the relationship between a binary quality indicator for the polycrystalline defect and functional process variables. We propose a logistic regression model with a hierarchical nonnegative garrote-based variable selection method that can accurately estimate the model, identify key process variables, and capture important features.
"Logistic regression for crystal growth process modeling through hierarchical nonnegative garrote-based variable selection." IIE Transactions 48(1): 787–796.
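The nonnegative garrote the article builds on is easiest to see in its original linear form: keep an initial fit, then shrink each coefficient by a factor c_j ≥ 0 chosen under an L1-type penalty. The projected-gradient sketch below shows that building block only; the article's logistic likelihood and hierarchy constraints between variables and their features are not reproduced:

```python
import numpy as np

def nonnegative_garrote(X, y, lam, iters=4000, lr=0.02):
    """Breiman's nonnegative garrote for a linear model: scale each OLS
    coefficient beta_j by c_j >= 0, choosing c to minimize the penalized
    squared error ||y - Z c||^2 / n + lam * sum(c) with Z_j = X_j * beta_j.
    Solved here by projected gradient descent (sketch only)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # initial (OLS) fit
    Z = X * beta                                   # column j scaled by beta_j
    c = np.ones(X.shape[1])
    for _ in range(iters):
        grad = -Z.T @ (y - Z @ c) / len(y) + lam   # gradient of the objective
        c = np.maximum(0.0, c - lr * grad)         # project onto c >= 0
    return c * beta                                # garrote-shrunken coefficients
```

Because the shrinkage factors hit zero exactly, irrelevant variables drop out of the model, which is what makes the garrote a variable selection device rather than a pure shrinkage one.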
Pub Date: 2016-03-24 | DOI: 10.1080/0740817X.2016.1167289
Mei Han, Matthias Hwai Yong Tan
Abstract: Robust parameter design and tolerance design are effective methods for improving process quality. It has been reported in the literature that the traditional two-stage approach, which performs parameter design followed by tolerance design to reduce sensitivity to variations of the input characteristics, is suboptimal. To mitigate this problem, an integrated parameter and tolerance design (IPTD) methodology suitable for linear models has been suggested. In this article, a computer-aided IPTD approach for computer experiments is proposed, in which the means and tolerances of the input characteristics are simultaneously optimized to minimize the total cost. A Gaussian process metamodel is used to emulate the response function and so reduce the number of simulations. A closed-form expression for the posterior expected quality loss is derived to facilitate optimization in computer-aided IPTD. As there is often uncertainty about the true quality and tolerance costs, multi-objective optimization with quality loss and tolerance cost as objective functions is proposed to find robust optimal solutions.
"Integrated parameter and tolerance design with computer experiments." IIE Transactions 48(1): 1004–1015.
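The quantity being optimized, the expected quadratic quality loss when an input characteristic varies within its tolerance, can be illustrated by Monte Carlo on a toy response surface. The quadratic f below is an assumed stand-in (the article evaluates this expectation in closed form under a Gaussian process emulator), and reading the tolerance as a 3-sigma band is likewise an illustrative convention:

```python
import numpy as np

def expected_loss(mean, tol, target=5.0, n_mc=20000, seed=0):
    """Monte-Carlo expected quadratic quality loss E[(f(X) - target)^2]
    for an input characteristic X ~ N(mean, (tol/3)^2), treating the
    tolerance tol as a 3-sigma band.  The quadratic f is a toy stand-in
    for the paper's Gaussian-process emulator of the simulator."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, tol / 3.0, n_mc)
    f = 2.0 + 0.8 * x + 0.05 * x ** 2          # assumed toy response surface
    return float(np.mean((f - target) ** 2))
```

Tightening the tolerance lowers this loss but raises manufacturing cost, which is why IPTD searches over the mean and the tolerance jointly, and why, under cost uncertainty, the article keeps quality loss and tolerance cost as separate objectives.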
Pub Date: 2016-03-23 | DOI: 10.1080/0740817X.2015.1122252
I. Yu
Abstract: In this article, we extend the modified Box–Meyer method and propose an approach to identify both active location and dispersion factors in a screening experiment. Since several candidate models can be considered simultaneously under the framework of Bayesian model averaging, the proposed method can overcome the problem of failing to identify some active factors because of either the alias structure or misspecification of the location model. For illustration, three practical experiments and one synthetic data set are analyzed.
"A Bayesian approach to the identification of active location and dispersion factors." IIE Transactions 48(1): 629–637.
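The model-averaging step can be sketched for location factors alone: score every small candidate model, convert the scores to weights, and average each factor's membership indicator. BIC-based weights are a common stand-in for the posterior model probabilities; the article's method averages over joint location/dispersion models, which this sketch does not attempt:

```python
import numpy as np
from itertools import combinations

def inclusion_probs(X, y, max_size=2):
    """Bayesian-model-averaging sketch for location factors: fit every
    linear model with up to max_size factors, weight each model by
    exp(-BIC/2) (a stand-in for its posterior probability), and average
    each factor's membership indicator across models."""
    n, p = X.shape
    models, bics = [], []
    for size in range(max_size + 1):
        for S in combinations(range(p), size):
            cols = np.column_stack([np.ones(n)] + [X[:, j] for j in S])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = float(np.sum((y - cols @ beta) ** 2))
            models.append(S)
            bics.append(n * np.log(rss / n) + (size + 1) * np.log(n))
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))   # model weights
    w /= w.sum()
    return np.array([sum(w[i] for i, S in enumerate(models) if j in S)
                     for j in range(p)])
```

Because no single model has to carry the whole decision, a factor masked by aliasing in one candidate model can still accumulate inclusion probability from the others, which is the advantage the abstract points to.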