Pub Date: 2020-02-10 | DOI: 10.1080/01966324.2020.1718568
M. J. S. Khan, A. Sharma, S. Iqrar
Abstract In this paper, we derive exact and explicit expressions for the single and product moments of the Lindley distribution based on generalized order statistics, in terms of the Gauss hypergeometric function and the Kampé de Fériet series. These results include, as special cases, exact expressions for the single and product moments of order statistics, progressive Type-II censored order statistics, record values, Pfeifer's record values, and sequential order statistics from the Lindley distribution. Further, the means and variances of the Lindley distribution based on order statistics, progressive Type-II censored order statistics, and generalized order statistics are computed. Using these results, we also obtain the best linear unbiased estimators of the location and scale parameters of the Lindley distribution. Finally, a real-data application is given.
{"title":"On Moments of Lindley Distribution Based on Generalized Order Statistics","authors":"M. J. S. Khan, A. Sharma, S. Iqrar","doi":"10.1080/01966324.2020.1718568","DOIUrl":"https://doi.org/10.1080/01966324.2020.1718568","url":null,"abstract":"Abstract In this paper, we have deduced the exact and explicit expressions for single and product moments of Lindley distribution based on generalized order statistics in terms of Gauss hypergeometric function and Kampé de Fériet series. These results include the exact expression for the single and product moments of order statistics, progressive Type II censoring, record values Pfeifer’s record value and sequential order statistics from Lindley distribution. Further, means and variances of Lindley distribution based on order statistics, progressive type II censored order statistics and for generalized order statistics have been computed. We have also calculated the best linear unbiased estimators for location and scale parameters of Lindley distribution utilizing these results. Finally, a real data application is given.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"214 - 233"},"PeriodicalIF":0.0,"publicationDate":"2020-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1718568","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48635705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-30 | DOI: 10.1080/01966324.2020.1716281
J. Kleijnen, Wim C. M. van Beers
Abstract Kriging, or Gaussian process (GP) modeling, is an interpolation method that assumes the outputs (responses) are more strongly correlated the closer their inputs (explanatory or independent variables) are. Such a GP has unknown (hyper)parameters that are usually estimated by maximum likelihood. With big data, however, it becomes computationally problematic to obtain these estimates, the corresponding Kriging predictor, and its predictor variance. To solve this problem, some authors select a relatively small subset from the big set of previously observed "old" data. These selection methods are sequential and depend on the variance of the Kriging predictor; this variance requires a specific Kriging model and the estimation of its parameters. The resulting designs turn out to be "local"; i.e., most selected old input combinations are concentrated around the new combination to be predicted. We develop a simpler one-shot (fixed-sample, non-sequential) design: from the big data set we select a small subset containing the nearest neighbors of the new combination. To compare our designs and the sequential designs empirically, we use squared prediction errors in several numerical experiments. These experiments show that our design may yield reasonable performance.
{"title":"Prediction for Big Data Through Kriging: Small Sequential and One-Shot Designs","authors":"J. Kleijnen, Wim C. M. van Beers","doi":"10.1080/01966324.2020.1716281","DOIUrl":"https://doi.org/10.1080/01966324.2020.1716281","url":null,"abstract":"Abstract Kriging—or Gaussian process (GP) modeling—is an interpolation method assuming that the outputs (responses) are more correlated, as the inputs (explanatory or independent variables) are closer. Such a GP has unknown (hyper)parameters that are usually estimated through the maximum-likelihood method. Big data, however, make it problematic to compute these estimated parameters, and the corresponding Kriging predictor and its predictor variance. To solve this problem, some authors select a relatively small subset from the big set of previously observed “old” data. These selection methods are sequential, and they depend on the variance of the Kriging predictor; this variance requires a specific Kriging model and the estimation of its parameters. The resulting designs turn out to be “local”; i.e., most selected old input combinations are concentrated around the new combination to be predicted. We develop a simpler one-shot (fixed-sample, non-sequential) design; i.e., from the big data set we select a small subset with the nearest neighbors of the new combination. To compare our designs and the sequential designs empirically, we use the squared prediction errors, in several numerical experiments. These experiments show that our design may yield reasonable performance.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"199 - 213"},"PeriodicalIF":0.0,"publicationDate":"2020-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1716281","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46860746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/01966324.2019.1570882
F. Guessoum, S. Haddadi, E. Gattal
SYNOPTIC ABSTRACT The nurse rostering problem is to create a day-to-day shift assignment for each nurse, subject to a predefined set of constraints. We suggest a two-phase method based on simple ideas. The first phase applies a generic variable-fixing heuristic, which yields a very small and sparse reduced problem. In the second phase, the reduced problem is solved with a general-purpose MIP solver. The proposed method is tested on the NSPLib dataset, and the results show that it is capable of identifying high-quality solutions. When compared with recently developed methods, it turns out to be the fastest.
{"title":"Simple, Yet Fast and Effective Two-Phase Method for Nurse Rostering","authors":"F. Guessoum, S. Haddadi, E. Gattal","doi":"10.1080/01966324.2019.1570882","DOIUrl":"https://doi.org/10.1080/01966324.2019.1570882","url":null,"abstract":"SYNOPTIC ABSTRACT The nurse rostering problem is to create a day-to-day shift assignment of each nurse subject to a predefined set of constraints. Based on simple ideas, a two-phase method is suggested. The first phase consists of applying a generic variable-fixing heuristic. As a result, a very small and sparse-reduced problem is obtained. In the second phase, the reduced problem is solved by utilizing a general-purpose MIP solver. The proposed method is tested on NSPLib dataset, and the results obtained show that it is capable of identifying high quality solutions. When compared with recently developed methods, it turns out to be the fastest.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1570882","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48341688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/01966324.2019.1579124
Shuvashree Mondal, D. Kundu
SYNOPTIC ABSTRACT Progressive censoring schemes have received considerable attention recently, with developments based mainly on a single population. Mondal and Kundu (2016) introduced the balanced joint progressive censoring (BJPC) scheme and studied exact inference for two exponential populations. It is well known, however, that the exponential distribution has some limitations. In this article, we implement the BJPC scheme on two Weibull populations with a common shape parameter. The treatment here is purely Bayesian. Under the Bayesian setup, we assume a Beta-Gamma prior on the scale parameters and an independent Gamma prior on the common shape parameter. Under these priors, the Bayes estimators cannot be obtained in closed form, so we use importance sampling to compute the Bayes estimators and the associated credible intervals. We further consider order-restricted Bayesian inference of the parameters based on ordered Beta-Gamma priors on the scale parameters. We propose a precision criterion based on the expected volume of the joint credible set of the model parameters to find the optimum censoring scheme. We perform extensive simulation experiments to study the performance of the estimators and, finally, analyze one real data set for illustrative purposes.
{"title":"Bayesian Inference for Weibull Distribution under the Balanced Joint Type-II Progressive Censoring Scheme","authors":"Shuvashree Mondal, D. Kundu","doi":"10.1080/01966324.2019.1579124","DOIUrl":"https://doi.org/10.1080/01966324.2019.1579124","url":null,"abstract":"SYNOPTIC ABSTRACT Progressive censoring schemes have received considerable attention recently. All of these developments are mainly based on a single population. Recently, Mondal and Kundu (2016) introduced the balanced joint progressive censoring scheme (BJPC), and studied the exact inference for two exponential populations. It is well known that the exponential distribution has some limitations. In this article, we implement the BJPC scheme on two Weibull populations with the common shape parameter. The treatment here is purely Bayesian in nature. Under the Bayesian set up we assume a Beta Gamma prior of the scale parameters, and an independent Gamma prior for the common shape parameter. Under these prior assumptions, the Bayes estimators cannot be obtained in closed forms, and we use the importance sampling technique to compute the Bayes estimators and the associated credible intervals. We further consider the order restricted Bayesian inference of the parameters based on the ordered Beta Gamma priors of the scale parameters. We propose one precision criteria based on expected volume of the joint credible set of model parameters to find out the optimum censoring scheme. We perform extensive simulation experiments to study the performance of the estimators, and finally analyze one real data set for illustrative purposes.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"56 - 74"},"PeriodicalIF":0.0,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1579124","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46767411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/01966324.2019.1570883
Ajit Chaturvedi, Sudeep R. Bapat, Neeraj Joshi
SYNOPTIC ABSTRACT In the first part of this article, a minimum risk estimation procedure is developed for estimating the mean μ of an inverse Gaussian distribution with unknown scale parameter λ. A weighted squared-error loss function is assumed, and we aim at controlling the associated risk function. First- and second-order asymptotic properties are established for our stopping rule. The second part develops a minimum risk estimation procedure for estimating the scale parameter λ of an inverse Gaussian distribution, under a squared-error loss function. The failure of any fixed-sample-size procedure is established and, hence, sequential procedures are proposed to deal with this situation. For this estimation problem, we make use of the uniformly minimum variance unbiased estimator (UMVUE) and the minimum mean square estimator (MMSE) of the associated parameters. Second-order approximations are derived for the sequential procedures, and improved estimators are proposed.
{"title":"Sequential Minimum Risk Point Estimation of the Parameters of an Inverse Gaussian Distribution","authors":"Ajit Chaturvedi, Sudeep R. Bapat, Neeraj Joshi","doi":"10.1080/01966324.2019.1570883","DOIUrl":"https://doi.org/10.1080/01966324.2019.1570883","url":null,"abstract":"SYNOPTIC ABSTRACT In the first part of this article, a minimum risk estimation procedure is developed for estimating the mean μ of an inverse Gaussian distribution having an unknown scale parameter λ. A weighted squared-error loss function is assumed, and we aim at controlling the associated risk function. First and second-order asymptotic properties are also established for our stopping rule. The second part deals with developing a minimum risk estimation procedure for estimating the scale parameter λ of an inverse Gaussian distribution. We make use of a squared-error loss function here. The failure of a fixed sample size procedure is established and, hence, some sequential procedures are proposed to deal with this situation. For this estimation problem, we make use of the uniformly minimum variance unbiased estimator (UMVUE) and the minimum mean square estimator (MMSE) of the associated parameters. Second-order approximations are derived for the sequential procedures and improved estimators are proposed.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"20 - 40"},"PeriodicalIF":0.0,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1570883","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43374777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/01966324.2019.1642817
K. Ajith, E. I. Abdul Sathar
SYNOPTIC ABSTRACT Recently, the concept of dynamic Varma's entropy has been proposed in the literature. In this article, we propose weighted forms of Varma's and dynamic Varma's entropy measures. We discuss several properties of the proposed measures, including the property of uniquely determining the underlying distribution, the effect of linear transformations, and bounds. We also discuss some new ageing classes, characterization results, and the relationship of the proposed measures to some well-known reliability measures.
{"title":"Some Results on Dynamic Weighted Varma’s Entropy and its Applications","authors":"K. Ajith, E. I. Abdul Sathar","doi":"10.1080/01966324.2019.1642817","DOIUrl":"https://doi.org/10.1080/01966324.2019.1642817","url":null,"abstract":"SYNOPTIC ABSTRACT Recently, the concept of dynamic Varma’s entropy has been proposed in the literature. In this article, we propose weighted forms of Varma’s and dynamic Varma’s entropy measures. We discuss several properties of proposed measures, including uniquely determine property, effect of linear transformation, and bounds. We also discuss some new ageing classes, characterization results, and relationship of proposed measures with some well-known reliability measures.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"90 - 98"},"PeriodicalIF":0.0,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1642817","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48642192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/01966324.2019.1580644
Mahendra Saha, Sumit Kumar, Sudhansu S. Maiti, Abhimanyu Singh Yadav, S. Dey
SYNOPTIC ABSTRACT Process capability indices (PCIs) have been widely applied in measuring product potential and performance. They are of great significance to quality-control engineers, as they quantify the relation between the actual performance of a process and the preset specifications of the product. Most of the suggested PCIs were developed for normally distributed processes. In this article, we consider the generalized process capability index Cpy suggested by Maiti et al. (2010), which can be used for normal and non-normal, continuous as well as discrete, random variables. The objective of this article is twofold. First, we obtain the maximum likelihood estimator (MLE) and the minimum variance unbiased estimator (MVUE) of the PCI Cpy for Lindley distributed quality characteristics. Second, we compare the asymptotic confidence interval (ACI) with four bootstrap confidence intervals (BCIs) of Cpy based on maximum likelihood estimation: standard bootstrap (s-boot), percentile bootstrap (p-boot), Student's t bootstrap (t-boot), and bias-corrected accelerated bootstrap (BCa-boot). Monte Carlo simulations are carried out to compare the performance of the MLEs and MVUEs, and to investigate the average widths, coverage probabilities, and relative coverages of the ACI and BCIs of Cpy. Two real data sets are analyzed for illustrative purposes.
{"title":"Asymptotic and Bootstrap Confidence Intervals for the Process Capability Index cpy Based on Lindley Distributed Quality Characteristic","authors":"Mahendra Saha, Sumit Kumar, Sudhansu S. Maiti, Abhimanyu Singh Yadav, S. Dey","doi":"10.1080/01966324.2019.1580644","DOIUrl":"https://doi.org/10.1080/01966324.2019.1580644","url":null,"abstract":"SYNOPTIC ABSTRACT Process capability indices (PCIs) have been widely applied in measuring product potential and performance. It is of great significance to quality control engineers, as it quantifies the relation between the actual performance of the process and the preset specifications of the product. Among the plethora of the suggested PCIs, most of them were developed for normally distributed processes. In this article, we consider generalized process capability index Cpy suggested by Maiti et al. (2010), which can be used for normal, non-normal, and continuous as well as discrete random variables. The objective of this article is twofold. First, we obtain maximum likelihood estimator (MLE) and minimum variance unbiased estimator (MVUE) of the PCI Cpy for the Lindley distributed quality characteristics. Second, we compare asymptotic confidence interval (ACI) with four bootstrap confidence intervals (BCIs); namely, standard bootstrap (s-boot), percentile bootstrap (p-boot), Student’s t bootstrap (t-boot), and bias-corrected accelerated bootstrap (BCa-boot) of Cpy based on maximum likelihood method of estimation. Monte Carlo simulations have been carried out to compare the performance of MLEs and MVUEs, and also investigate the average widths, coverage probabilities, and relative coverages of ACI and BCIs of Cpy. Two real data sets have been analyzed for illustrative purposes.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"75 - 89"},"PeriodicalIF":0.0,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1580644","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48019427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/01966324.2019.1579123
A. Rabie, Junping Li
SYNOPTIC ABSTRACT This article deals with Bayesian and E-Bayesian (expectation of the Bayesian estimate) estimation of the parameter and the reliability function of the Burr-X distribution based on a generalized Type-I hybrid censoring scheme. Bayesian and E-Bayesian estimates are obtained under the LINEX and squared-error loss functions, and are derived by applying Markov chain Monte Carlo techniques. Credible intervals for the Bayesian and E-Bayesian estimates are also computed. Examples based on generalized Type-I hybrid censored samples and real data sets are presented for illustration. Finally, the Bayesian and E-Bayesian estimation methods are compared.
{"title":"E-Bayesian Estimation for Burr-X Distribution Based on Generalized Type-I Hybrid Censoring Scheme","authors":"A. Rabie, Junping Li","doi":"10.1080/01966324.2019.1579123","DOIUrl":"https://doi.org/10.1080/01966324.2019.1579123","url":null,"abstract":"SYNOPTIC ABSTRACT This article deals with Bayesian and E-Bayesian (expectation of the Bayesian estimate) estimation methods of the parameter and the reliability function of Burr-X distribution based on a generalized Type-I hybrid censoring scheme. Bayesian and E-Bayesian estimates are obtained under LINEX and squared error loss functions. By applying Markov chain Monte Carlo techniques, Bayesian and E-Bayesian estimates based on a generalized Type-I hybrid censoring scheme are derived. Also, credible intervals for Bayesian and E-Bayesian estimates are computed. Examples of generalized Type-I hybrid censored samples and real data sets are presented for the purpose of illustration. Finally, a comparison between Bayesian and E-Bayesian estimation methods is conducted.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"41 - 55"},"PeriodicalIF":0.0,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1579123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48294866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-05-01 | DOI: 10.1080/01966324.2019.1597794
M. Ghahramani, A. Badamchi Zadeh, M. R. Salehi Rad
SYNOPTIC ABSTRACT The current study examines a queuing system with two incongruent arrival streams and two services. Two types of customers enter the system according to Poisson processes, and the service times are assumed to follow exponential distributions. After the first service is completed, the system may provide feedback to repeat the first service, let the customer leave the system, or continue to the second service. The same policy applies to the other type of customer. All stochastic processes involved in the system are assumed to be independent. A probability generating function is derived for each customer type and for the system, yielding the performance measures. We examine the validity of the results through numerical approaches.
{"title":"Two M/M/1 Queues with Incongruent Arrivals and Services with Random Feedback","authors":"M. Ghahramani, A. Badamchi Zadeh, M. R. Salehi Rad","doi":"10.1080/01966324.2019.1597794","DOIUrl":"https://doi.org/10.1080/01966324.2019.1597794","url":null,"abstract":"SYNOPTIC ABSTRACT The current study examines a queuing system with two incongruent arrivals and two services. In this regard, two types of customers enter the system by a Poisson process and the service times are assumed to have exponential distributions. After the first service is completed, the system may provide feedback to repeat the first service, leave the system, or continue to give the second service. The same policy is utilized for the other kind of customers. The whole stochastic processes involved in the system are considered as independent random variables. A probability generating function is derived for each type and for the system that yield the performance measures. We examine the validity of the results through numerical approaches.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"38 1","pages":"386 - 394"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1597794","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46782878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-04-30 | DOI: 10.1080/01966324.2018.1551733
S. Roychowdhury, D. Bhattacharya
SYNOPTIC ABSTRACT In systems engineering, numerous efforts have been made to improve system performance under a binary setup, where each component, as well as the entire system, is in one of two states: perfect functioning or complete failure. However, there are systems which perform their tasks at various performance levels rather than at only these two. Such systems are multi-state systems; they may pass through several partially working states before reaching complete failure. Hence, there is a need to develop procedures for improving the performance of multi-state systems consisting of multi-state components. This article addresses this issue using a multi-state component importance measure. The measure developed here is used to assess the impact of individual components on the improvement of system performance. Some basic theory for homogeneous multi-state coherent systems is developed and, finally, a rule is derived to improve system performance using the importance measure. The application of the results is illustrated through a real-life example.
{"title":"Performance Improvement of a Multi-State Coherent System using Component Importance Measure","authors":"S. Roychowdhury, D. Bhattacharya","doi":"10.1080/01966324.2018.1551733","DOIUrl":"https://doi.org/10.1080/01966324.2018.1551733","url":null,"abstract":"SYNOPTIC ABSTRACT In system engineering, numerous efforts have been made for achieving improvement in system performance under a binary set up, where each component, as well as the entire system, has any one of two states; namely, perfect functioning and complete failure. However, there are systems which perform their tasks at various performance levels rather than functioning at only the above two performance levels. These systems are multi-state systems. In these systems, there can be some partially working states or performance levels before the system comes to the state of complete failure. Hence, the need has been felt to develop the procedures for improving the performance of multi-state systems consisting of multi-state components. This article resolves such an issue for a multi-state system using a multi-state component importance measure. The measure developed here is used to assess the impact of individual components on the improvement of system performance. Some basic theory to deal with a homogeneous multi-state coherent system has been developed, and finally, a rule has been derived to improve system performance using the importance measure. The applications of the results have been illustrated through a real-life example.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"38 1","pages":"312 - 324"},"PeriodicalIF":0.0,"publicationDate":"2019-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2018.1551733","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43816343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}