Pub Date: 2016-05-26 | DOI: 10.1080/0740817X.2016.1189631
Mostafa Abouei Ardakan, M. Sima, Ali Zeinal Hamadani, D. Coit
ABSTRACT This article presents a new interpretation and formulation of the Reliability–Redundancy Allocation Problem (RRAP) and demonstrates that solutions to this new problem provide distinct advantages compared with traditional approaches. Using redundant components is a common method to increase the reliability of a system. There are two traditional strategies for adding redundant components to a system or subsystem: active and standby redundancy. Recently a new redundancy strategy, called the “mixed” strategy, has been introduced. It has been proved that in the Redundancy Allocation Problem (RAP), this new strategy performs better than the active and standby strategies alone. In this article, the recently introduced mixed strategy is implemented in the RRAP, which is more complicated than the RAP, and the results of using the mixed strategy are compared with the active and standby strategies. To analyze the performance of the new approach, benchmark problems for the RRAP are selected and the mixed strategy is used to optimize the system reliability in these situations. Finally, the reliability of the benchmark problems with the mixed strategy is compared with the best results of the systems when active or standby strategies are considered. The final results show that the mixed strategy improves the reliability of all the benchmark problems, and the new strategy outperforms the active and standby strategies in the RRAP.
{"title":"A novel strategy for redundant components in reliability--redundancy allocation problems","authors":"Mostafa Abouei Ardakan, M. Sima, Ali Zeinal Hamadani, D. Coit","doi":"10.1080/0740817X.2016.1189631","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1189631","url":null,"abstract":"ABSTRACT This article presents a new interpretation and formulation of the Reliability–Redundancy Allocation Problem (RRAP) and demonstrates that solutions to this new problem provide distinct advantages compared with traditional approaches. Using redundant components is a common method to increase the reliability of a system. In order to add the redundant components to a system or a subsystem, there are two traditional types of strategies called active and standby redundancy. Recently a new redundancy strategy, called the “mixed” strategy, has been introduced. It has been proved that in the Redundancy Allocation Problem (RAP), this new strategy has a better performance compared with active and standby strategies alone. In this article, the recently introduced mixed strategy is implemented in the RRAP, which is more complicated than the RAP, and the results of using the mixed strategy are compared with the active and standby strategies. To analyze the performance of the new approach, some benchmark problems on the RRAP are selected and the mixed strategy is used to optimize the system reliability in these situations. Finally, the reliability of benchmark problems with the mixed strategy is compared with the best results of the systems when active or standby strategies are considered. The final results show that the mixed strategy results in an improvement in the reliability of all the benchmark problems and the new strategy outperforms the active and standby strategies in RRAP.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"1043 - 1057"},"PeriodicalIF":0.0,"publicationDate":"2016-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1189631","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59755255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-26 | DOI: 10.1080/0740817X.2016.1189630
James Cao, K. C. So
ABSTRACT This article examines the value of demand forecast updates in an assembly system where a single assembler must order components from independent suppliers with different lead times. By staggering the ordering times, the assembler can utilize the latest market information, as it develops, to form a better forecast over time. The updated forecast can then inform the next procurement decision. The objective of this research is to understand the specific operating environment under which demand forecast updates are most beneficial. Using a uniform demand adjustment model, we are able to derive analytical results that allow us to quantify the impact of demand forecast updates. We show that forecast updates can drastically improve profitability by reducing the mismatch cost caused by demand uncertainty.
{"title":"The value of demand forecast updates in managing component procurement for assembly systems","authors":"James Cao, K. C. So","doi":"10.1080/0740817X.2016.1189630","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1189630","url":null,"abstract":"ABSTRACT This article examines the value of demand forecast updates in an assembly system where a single assembler must order components from independent suppliers with different lead times. By staggering each ordering time, the assembler can utilize the latest market information, as it is developed, to form a better forecast over time. The updated forecast can subsequently be used to decide the following procurement decision. The objective of this research is to understand the specific operating environment under which demand forecast updates are most beneficial. Using a uniform demand adjustment model, we are able to derive analytical results that allow us to quantify the impact of demand forecast updates. We show that forecast updates can drastically improve profitability by reducing the mismatch cost caused by demand uncertainty.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"1198 - 1216"},"PeriodicalIF":0.0,"publicationDate":"2016-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1189630","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59755643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-03 | DOI: 10.1080/0740817X.2015.1063791
Lixin Tang, Defeng Sun, Jiyin Liu
ABSTRACT This study is motivated by the practices of large iron and steel companies that have steady and heavy demands for bulk raw materials, such as iron ore, coal, limestone, etc. These materials are usually transported to a bulk cargo terminal by ships (or to a station by trains). Once unloaded, they are moved to and stored in a bulk material stockyard, waiting for retrieval for use in production. Efficient storage space allocation and ship scheduling are critical to achieving high space utilization, low material loss, and low transportation costs. In this article, we study the integrated storage space allocation and ship scheduling problem in the bulk cargo terminal. Our problem is different from other associated problems due to the special way that the materials are transported and stored. A novel mixed-integer programming model is developed and then solved using a Benders decomposition algorithm, which is enhanced by the use of various valid inequalities, combinatorial Benders cuts, variable reduction tests, and an iterative heuristic procedure. Computational results indicate that the proposed solution method is much more efficient than the standard solution software CPLEX.
{"title":"Integrated storage space allocation and ship scheduling problem in bulk cargo terminals","authors":"Lixin Tang, Defeng Sun, Jiyin Liu","doi":"10.1080/0740817X.2015.1063791","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1063791","url":null,"abstract":"ABSTRACT This study is motivated by the practices of large iron and steel companies that have steady and heavy demands for bulk raw materials, such as iron ore, coal, limestone, etc. These materials are usually transported to a bulk cargo terminal by ships (or to a station by trains). Once unloaded, they are moved to and stored in a bulk material stockyard, waiting for retrieval for use in production. Efficient storage space allocation and ship scheduling are critical to achieving high space utilization, low material loss, and low transportation costs. In this article, we study the integrated storage space allocation and ship scheduling problem in the bulk cargo terminal. Our problem is different from other associated problems due to the special way that the materials are transported and stored. A novel mixed-integer programming model is developed and then solved using a Benders decomposition algorithm, which is enhanced by the use of various valid inequalities, combinatorial Benders cuts, variable reduction tests, and an iterative heuristic procedure. Computational results indicate that the proposed solution method is much more efficient than the standard solution software CPLEX.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"428 - 439"},"PeriodicalIF":0.0,"publicationDate":"2016-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1063791","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-03 | DOI: 10.1080/0740817X.2015.1056392
Y. Bukchin, E. Wexler
Abstract The effect of workers’ learning curves on the production rate in manual assembly lines is significant when producing relatively small batches of different products. This research studies this effect and suggests applying a work-sharing mechanism among the workers to improve the makespan (time to complete the batch). The proposed mechanism suggests that adjacent cross-trained workers will help each other in order to reduce idle times caused by blockage and starvation. The effect of work sharing and buffers on the makespan is studied and compared with a baseline situation, where the line does not contain any buffers and work sharing is not applied. Several linear programming and mixed-integer linear programming formulations for makespan minimization are presented. These formulations provide optimal work allocations to stations and optimal parameters of the work-sharing mechanism. A numerical study is conducted to examine the effect of buffers and work sharing on the makespan reduction in different environment settings. Numerical results are given along with some recommendations regarding the system design and operation.
{"title":"The effect of buffers and work sharing on makespan improvement of small batches in assembly lines under learning effects","authors":"Y. Bukchin, E. Wexler","doi":"10.1080/0740817X.2015.1056392","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1056392","url":null,"abstract":"Abstract The effect of workers’ learning curves on the production rate in manual assembly lines is significant when producing relatively small batches of different products. This research studies this effect and suggests applying a work-sharing mechanism among the workers to improve the makespan (time to complete the batch). The proposed mechanism suggests that adjacent cross-trained workers will help each other in order to reduce idle times caused by blockage and starvation. The effect of work sharing and buffers on the makespan is studied and compared with a baseline situation, where the line does not contain any buffers and work sharing is not applied. Several linear programming and mixed-integer linear programming formulations for makespan minimization are presented. These formulations provide optimal work allocations to stations and optimal parameters of the work-sharing mechanism. A numerical study is conducted to examine the effect of buffers and work sharing on the makespan reduction in different environment settings. Numerical results are given along with some recommendations regarding the system design and operation.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"403 - 414"},"PeriodicalIF":0.0,"publicationDate":"2016-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1056392","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-04-25 | DOI: 10.1080/0740817X.2015.1133940
Yanting Li, L. Shu, F. Tsung
ABSTRACT The spatial scan statistic is one of the main tools for testing the presence of clusters in a geographical region. The recently proposed Fast Subset Scan (FSS) method represents an important extension, as it is computationally efficient and enables detection of clusters with arbitrary shapes. Aimed at automatically and simultaneously detecting multiple clusters of arbitrary shape, this article explores the False Discovery (FD) approach, which originates from multiple hypothesis testing. We show that the FD approach can provide, on average, higher detection power and better identification capability than the standard scan and FSS methods.
{"title":"A false discovery approach for scanning spatial disease clusters with arbitrary shapes","authors":"Yanting Li, L. Shu, F. Tsung","doi":"10.1080/0740817X.2015.1133940","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1133940","url":null,"abstract":"ABSTRACT The spatial scan statistic is one of the main tools for testing the presence of clusters in a geographical region. The recently proposed Fast Subset Scan (FSS) method represents an important extension, as it is computationally efficient and enables detection of clusters with arbitrary shapes. Aimed at automatically and simultaneously detecting multiple clusters of any shapes, this article explores the False Discovery (FD) approach originated from multiple hypothesis testing. We show that the FD approach can provide a higher detection power and better identification capability than the standard scan and FSS methods, on average.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"684 - 698"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1133940","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-04-13 | DOI: 10.1080/0740817X.2016.1172743
Y. Shu, Q. Feng, E. P. Kao, Hao Liu
ABSTRACT We use Lévy subordinators and non-Gaussian Ornstein–Uhlenbeck processes to model the evolution of degradation with random jumps. The superiority of our models stems from the flexibility of such processes in the modeling of stylized features of degradation data series such as jumps, linearity/nonlinearity, symmetry/asymmetry, and light/heavy tails. Based on corresponding Fokker–Planck equations, we derive explicit results for the reliability function and lifetime moments in terms of Laplace transforms, represented by Lévy measures. Numerical experiments are used to demonstrate that our general models perform well and are applicable for analyzing a large number of degradation phenomena. More importantly, they provide us with a new methodology to deal with multi-degradation processes under dynamic environments.
{"title":"Lévy-driven non-Gaussian Ornstein–Uhlenbeck processes for degradation-based reliability analysis","authors":"Y. Shu, Q. Feng, E. P. Kao, Hao Liu","doi":"10.1080/0740817X.2016.1172743","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1172743","url":null,"abstract":"ABSTRACT We use Lévy subordinators and non-Gaussian Ornstein–Uhlenbeck processes to model the evolution of degradation with random jumps. The superiority of our models stems from the flexibility of such processes in the modeling of stylized features of degradation data series such as jumps, linearity/nonlinearity, symmetry/asymmetry, and light/heavy tails. Based on corresponding Fokker–Planck equations, we derive explicit results for the reliability function and lifetime moments in terms of Laplace transforms, represented by Lévy measures. Numerical experiments are used to demonstrate that our general models perform well and are applicable for analyzing a large number of degradation phenomena. More important, they provide us with a new methodology to deal with multi-degradation processes under dynamicenvironments.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"1003 - 993"},"PeriodicalIF":0.0,"publicationDate":"2016-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1172743","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59755430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-04-06 | DOI: 10.1080/0740817X.2016.1156788
Siyang Gao, Weiwei Chen
ABSTRACT In this article, the problem of selecting an optimal subset from a finite set of simulated designs is considered. Given the total simulation budget constraint, the selection problem aims to maximize the Probability of Correct Selection (PCS) of the top m designs. To simplify the complexity of the PCS, an approximated probability measure is developed and an asymptotically optimal solution of the resulting problem is derived. A subset selection procedure, which is easy to implement in practice, is then designed. More important, we provide some useful insights on characterizing an efficient subset selection rule and how it can be achieved by adjusting the simulation budgets allocated to all of the designs.
{"title":"A new budget allocation framework for selecting top simulated designs","authors":"Siyang Gao, Weiwei Chen","doi":"10.1080/0740817X.2016.1156788","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1156788","url":null,"abstract":"ABSTRACT In this article, the problem of selecting an optimal subset from a finite set of simulated designs is considered. Given the total simulation budget constraint, the selection problem aims to maximize the Probability of Correct Selection (PCS) of the top m designs. To simplify the complexity of the PCS, an approximated probability measure is developed and an asymptotically optimal solution of the resulting problem is derived. A subset selection procedure, which is easy to implement in practice, is then designed. More important, we provide some useful insights on characterizing an efficient subset selection rule and how it can be achieved by adjusting the simulation budgets allocated to all of the designs.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"855 - 863"},"PeriodicalIF":0.0,"publicationDate":"2016-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1156788","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-04-06 | DOI: 10.1080/0740817X.2016.1163443
C. Alexopoulos, D. Goldsman, Peng Tang, James R. Wilson
ABSTRACT This article presents SPSTS, an automated sequential procedure for computing point and Confidence-Interval (CI) estimators for the steady-state mean of a simulation-generated process subject to user-specified requirements for the CI coverage probability and relative half-length. SPSTS is the first sequential method based on Standardized Time Series (STS) area estimators of the steady-state variance parameter (i.e., the sum of covariances at all lags). Whereas its leading competitors rely on the method of batch means to remove bias due to the initial transient, estimate the variance parameter, and compute the CI, SPSTS relies on the signed areas corresponding to two orthonormal STS area variance estimators for these tasks. In successive stages of SPSTS, standard tests for normality and independence are applied to the signed areas to determine (i) the length of the warm-up period, and (ii) a batch size sufficient to ensure adequate convergence of the associated STS area variance estimators to their limiting chi-squared distributions. SPSTS's performance is compared experimentally with that of recent batch-means methods using selected test problems of varying degrees of difficulty. SPSTS performed comparatively well in terms of its average required sample size as well as the coverage and average half-length of the final CIs.
{"title":"SPSTS: A sequential procedure for estimating the steady-state mean using standardized time series","authors":"C. Alexopoulos, D. Goldsman, Peng Tang, James R. Wilson","doi":"10.1080/0740817X.2016.1163443","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1163443","url":null,"abstract":"ABSTRACT This article presents SPSTS, an automated sequential procedure for computing point and Confidence-Interval (CI) estimators for the steady-state mean of a simulation-generated process subject to user-specified requirements for the CI coverage probability and relative half-length. SPSTS is the first sequential method based on Standardized Time Series (STS) area estimators of the steady-state variance parameter (i.e., the sum of covariances at all lags). Whereas its leading competitors rely on the method of batch means to remove bias due to the initial transient, estimate the variance parameter, and compute the CI, SPSTS relies on the signed areas corresponding to two orthonormal STS area variance estimators for these tasks. In successive stages of SPSTS, standard tests for normality and independence are applied to the signed areas to determine (i) the length of the warm-up period, and (ii) a batch size sufficient to ensure adequate convergence of the associated STS area variance estimators to their limiting chi-squared distributions. SPSTS's performance is compared experimentally with that of recent batch-means methods using selected test problems of varying degrees of difficulty. SPSTS performed comparatively well in terms of its average required sample size as well as the coverage and average half-length of the final CIs.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"864 - 880"},"PeriodicalIF":0.0,"publicationDate":"2016-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1163443","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59755068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-04-02 | DOI: 10.1080/0740817X.2015.1122253
Erik T. S. Bjarnason, S. Taghipour
ABSTRACT We investigate the maintenance and inventory policy for a k-out-of-n system where the components' failures are hidden and follow a non-homogeneous Poisson process. Two types of inspections are performed to find failed components: planned periodic inspections and unplanned opportunistic inspections. The latter are performed at system failure times when n − k + 1 components are simultaneously down. In all cases, the failed components are either minimally repaired or replaced with spare parts from the inventory. The inventory is replenished either periodically or when the system fails. The periodic orders have a random lead-time, but there is no lead-time for emergency orders, as these are placed at system failure times. The key objective is to develop a method to solve the joint maintenance and inventory problem for systems with a large number of components, a long planning horizon, and a large inventory. We construct a simulation model to jointly optimize the periodic inspection interval, the periodic reorder interval, and the periodic and emergency order-up-to levels. Due to the large search space, it is infeasible to try all possible combinations of decision variables in a reasonable amount of time. Thus, the simulation model is integrated with a heuristic search algorithm to obtain the optimal solution.
{"title":"Periodic inspection frequency and inventory policies for a k-out-of-n system","authors":"Erik T. S. Bjarnason, S. Taghipour","doi":"10.1080/0740817X.2015.1122253","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1122253","url":null,"abstract":"ABSTRACT We investigate the maintenance and inventory policy for a k-out-of-n system where the components' failures are hidden and follow a non-homogeneous Poisson process. Two types of inspections are performed to find failed components: planned periodic inspections and unplanned opportunistic inspections. The latter are performed at system failure times when n − k +1 components are simultaneously down. In all cases, the failed components are either minimally repaired or replaced with spare parts from the inventory. The inventory is replenished either periodically or when the system fails. The periodic orders have a random lead-time, but there is no lead-time for emergency orders, as these are placed at system failure times. The key objective is to develop a method to solve the joint maintenance and inventory problem for systems with a large number of components, long planning horizon, and large inventory. We construct a simulation model to jointly optimize the periodic inspection interval, the periodic reorder interval, and periodic and emergency order-up-to levels. Due to the large search space, it is infeasible to try all possible combinations of decision variables in a reasonable amount of time. Thus, the simulation model is integrated with a heuristic search algorithm to obtain the optimal solution.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"638 - 650"},"PeriodicalIF":0.0,"publicationDate":"2016-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1122253","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-03-30 | DOI: 10.1080/0740817X.2015.1122254
K. Bastani, Prahalada K. Rao, Z. Kong
ABSTRACT The objective of this work is to realize real-time monitoring of process conditions in advanced manufacturing using multiple heterogeneous sensor signals. To achieve this objective, we propose an approach based on sparse estimation, called online sparse estimation-based classification (OSEC). The novelty of the OSEC approach is in representing data from sensor signals as an underdetermined linear system of equations and subsequently solving the underdetermined linear system using a newly developed greedy Bayesian estimation method. We apply the OSEC approach to two advanced manufacturing scenarios, namely, a fused filament fabrication additive manufacturing process and an ultraprecision semiconductor chemical–mechanical planarization process. Using the proposed OSEC approach, process drifts are detected and classified with higher accuracy than with popular machine learning techniques, achieving a fidelity approaching 90% (F-score). In comparison, conventional signal analysis techniques (e.g., neural networks, support vector machines, quadratic discriminant analysis, naïve Bayes) achieved F-scores in the range of 40% to 70%.
{"title":"An online sparse estimation-based classification approach for real-time monitoring in advanced manufacturing processes from heterogeneous sensor data","authors":"K. Bastani, Prahalada K. Rao, Z. Kong","doi":"10.1080/0740817X.2015.1122254","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1122254","url":null,"abstract":"ABSTRACT The objective of this work is to realize real-time monitoring of process conditions in advanced manufacturing using multiple heterogeneous sensor signals. To achieve this objective we propose an approach invoking the concept of sparse estimation called online sparse estimation-based classification (OSEC). The novelty of the OSEC approach is in representing data from sensor signals as an underdetermined linear system of equations and subsequently solving the underdetermined linear system using a newly developed greedy Bayesian estimation method. We apply the OSEC approach to two advanced manufacturing scenarios, namely, a fused filament fabrication additive manufacturing process and an ultraprecision semiconductor chemical–mechanical planarization process. Using the proposed OSEC approach, process drifts are detected and classified with higher accuracy compared with popular machine learning techniques. Process drifts were detected and classified with a fidelity approaching 90% (F-score) using OSEC. In comparison, conventional signal analysis techniques—e.g., neural networks, support vector machines, quadratic discriminant analysis, naïve Bayes—were evaluated with F-score in the range of 40% to 70%.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"165 1","pages":"579 - 598"},"PeriodicalIF":0.0,"publicationDate":"2016-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1122254","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}