Transmuted Burr Type X Distribution with Covariates Regression Modeling to Analyze Reliability Data
Pub Date: 2020-04-02 | DOI: 10.1080/01966324.2019.1605320 | American Journal of Mathematical and Management Sciences, 39(1), 99-121
Muhammad Shuaib Khan, R. King, I. Hudson
SYNOPTIC ABSTRACT This article investigates the potential usefulness of the three-parameter transmuted Burr type X (TBX) distribution for modeling reliability data and explores its structural properties using simulation. Explicit expressions are derived for the moments, incomplete moments, entropies, and mean deviation. The method of maximum likelihood is used to estimate the model parameters, and Monte Carlo simulations are conducted to examine the relative performance of the maximum likelihood estimators in terms of bias and mean squared error. A location-scale regression model based on the log-TBX distribution is proposed for modeling lifetime data. Use of this family of distributions is illustrated on fatigue fracture data and multiple myeloma patients' data.
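For readers who want to experiment with the TBX model, the following minimal Python sketch evaluates its CDF via the standard quadratic rank transmutation map, G(x) = (1 + λ)F(x) − λF(x)², applied to a Burr type X baseline; the (shape, scale) parameterization of the baseline is an assumption, since the abstract does not fix notation.

```python
import numpy as np

def burr_x_cdf(x, theta, sigma):
    """Burr type X CDF (assumed parameterization: shape theta, scale sigma)."""
    return (1.0 - np.exp(-(x / sigma) ** 2)) ** theta

def tbx_cdf(x, theta, sigma, lam):
    """Transmuted Burr X CDF via the quadratic rank transmutation map
    G(x) = (1 + lam) * F(x) - lam * F(x)**2, with |lam| <= 1."""
    F = burr_x_cdf(x, theta, sigma)
    return (1.0 + lam) * F - lam * F ** 2

print(tbx_cdf(1.0, theta=2.0, sigma=1.0, lam=0.5))
```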
{"title":"Transmuted Burr Type X Distribution with Covariates Regression Modeling to Analyze Reliability Data","authors":"Muhammad Shuaib Khan, R. King, I. Hudson","doi":"10.1080/01966324.2019.1605320","DOIUrl":"https://doi.org/10.1080/01966324.2019.1605320","url":null,"abstract":"SYNOPTIC ABSTRACT This article investigates the potential usefulness of the three-parameter transmuted Burr type X (TBX) distribution for modeling reliability data, and explore its structural properties using simulation. Explicit expressions are derived for moments, incomplete moments, entropies, and mean deviation. The method of maximum likelihood is used for estimating the model parameters. We conduct Monte Carlo simulations, which are used to examine the relative performance of the estimators using MLE in terms of bias and mean square errors. A location-scale regression model based on the log-TBX distribution is proposed for modeling lifetime data. Use of this family of distributions is illustrated for fatigue fracture data and multiple myeloma patient’s data.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"121 - 99"},"PeriodicalIF":0.0,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1605320","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46626659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Limit Theorems for Sums of Dependent and Non-Identical Bernoulli Random Variables
Pub Date: 2020-04-02 | DOI: 10.1080/01966324.2019.1673266 | American Journal of Mathematical and Management Sciences, 39(1), 150-165
Deepak Singh, Somesh Kumar
SYNOPTIC ABSTRACT In this paper, a new class of dependent Bernoulli random variables is defined, in which the probability of success at a given trial is a function of the number of successes and the probabilities of success in the previous trials. The moment structure of this model is derived. Further, the strong law of large numbers, the central limit theorem, and the law of the iterated logarithm are established under the condition that the success probabilities are monotone. Simulations are carried out to demonstrate the law of large numbers and the central limit theorem.
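The abstract does not specify the exact dependence structure, so the sketch below uses one plausible illustrative form (an assumption, not the paper's model): the success probability at each trial is a convex combination of a baseline p0 and the running proportion of past successes. The simulation at the end illustrates the law-of-large-numbers behavior numerically.

```python
import numpy as np

def simulate_dependent_bernoulli(n, p0=0.5, theta=0.3, seed=0):
    """One path of n dependent Bernoulli trials; the success probability
    at trial k mixes the baseline p0 with the proportion of past successes."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    successes = 0.0
    for k in range(1, n + 1):
        p_k = p0 if k == 1 else (1 - theta) * p0 + theta * successes / (k - 1)
        x[k - 1] = rng.random() < p_k
        successes += x[k - 1]
    return x

# LLN check: sample means S_n / n concentrate around p0 under this form.
means = [simulate_dependent_bernoulli(5000, seed=s).mean() for s in range(200)]
print(np.mean(means), np.std(means))
```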
{"title":"Limit Theorems for Sums of Dependent and Non-Identical Bernoulli Random Variables","authors":"Deepak Singh, Somesh Kumar","doi":"10.1080/01966324.2019.1673266","DOIUrl":"https://doi.org/10.1080/01966324.2019.1673266","url":null,"abstract":"SYNOPTIC ABSTRACT In this paper, a new class of dependent Bernoulli random variables is defined. Here the probability of success at a given trial is a function of the number of successes and probabilities of successes in the previous trials. The moment structure for this model is derived. Further, the strong law of large numbers, the central limit theorem and the law of iterated logarithm are established under a condition that the success probabilities be monotone. Simulations are carried out to demonstrate the law of large numbers and the central limit theorem.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"150 - 165"},"PeriodicalIF":0.0,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1673266","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48967972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Approximation Algorithm for the Fault-Tolerant Facility Placement Problem with Rejection
Pub Date: 2020-04-02 | DOI: 10.1080/01966324.2019.1638853 | American Journal of Mathematical and Management Sciences, 39(1), 122-128
Shu-Yi Yu
SYNOPTIC ABSTRACT In this article, we revisit an interesting variant of the fault-tolerant facility placement problem with rejection (FTFPWR for short). In the FTFPWR, we are given a set of potential locations at which to open facilities and a set of customers to be connected to a number of facilities to satisfy their demands. At each location we may open any number of facilities, each with its corresponding opening cost. Each customer has an integral demand that specifies the number of open facilities it should be connected to; some of these facilities may be located at the same location, as long as they are all distinct and open. For each location-customer pair we are also given a distance, representing the cost to connect one unit of demand from the customer to a facility at that location, and we assume the distance function is metric. The task is to choose a subset of locations at which to open facilities and a subset of customers whose demands are connected to the open facilities, with the remaining customers rejected by paying their rejection costs, so that the sum of the facility opening costs, the connection costs, and the rejection costs is minimized. For the FTFPWR, the performance ratio of the currently best approximation algorithm is 2.515. By introducing a randomized rounding approach and a derandomization technique, we propose an improved approximation algorithm with a performance ratio of 2.07.
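As a small illustration of the randomized rounding ingredient (not the paper's 2.07-ratio algorithm itself), the sketch below rounds a fractional LP solution by opening each facility copy independently with probability equal to its fractional value; y_frac is a hypothetical dict mapping each location to the fractional opening values of its copies.

```python
import random

def round_openings(y_frac, seed=0):
    """Open each facility copy j at location i independently with
    probability equal to its fractional LP value y_frac[i][j]."""
    rng = random.Random(seed)
    return {i: [j for j, y in enumerate(ys) if rng.random() < y]
            for i, ys in y_frac.items()}

# Hypothetical fractional solution: two copies at loc1, one at loc2.
print(round_openings({"loc1": [0.7, 0.3], "loc2": [1.0]}))
```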
{"title":"Improved Approximation Algorithm for the Fault-Tolerant Facility Placement Problem with Rejection","authors":"Shu-Yi Yu","doi":"10.1080/01966324.2019.1638853","DOIUrl":"https://doi.org/10.1080/01966324.2019.1638853","url":null,"abstract":"SYNOPTIC ABSTRACT\u0000 In this article, we revisit an interesting variant of the fault-tolerant facility placement problem with rejection (FTFPWR, for short). In the FTFPWR, we are given a set of potential locations to open facilities, and a set of customers to be connected to a number of facilities to satisfy their demands. At each location we are allowed to open any number of facilities, each with its corresponding opening cost. Each customer has an integral demand which specifies the number of open facilities that it should be connected to. Some facilities that a customer is connected to could be located at the same location, as long as they are all different and open. For each location and customer pair we are also given a distance between them that represents the cost to connect one unit of demand from the customer to a facility at its location. We assume the distance function to be metric. The task is to choose a subset of locations to open facilities, and choose a subset of customers to connect their demands to the open facilities at locations with the remaining customers to be rejected by paying the rejection costs such that the sum of the facility opening costs, the connection costs, and the rejection costs is minimized. For the FTFPWR, the performance ratio of currently best approximation algorithm is 2.515. By introducing a randomized rounding approach and the derandomizing technique, we propose an improved approximation algorithm with the performance ratio of 2.07.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"122 - 128"},"PeriodicalIF":0.0,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1638853","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47440474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Management Strategies for the Defined Benefit Pension Fund Under Stochastic Framework
Pub Date: 2020-04-02 | DOI: 10.1080/01966324.2019.1691095 | American Journal of Mathematical and Management Sciences, 39(1), 182-197
P. Mwanakatwe, Lixin Song, Xiaoguang Wang
Abstract In this article, we analyze the optimal contribution and investment strategy problem for a defined benefit pension fund. The pension fund manager allocates capital between a riskless asset and a risky asset whose price dynamics follow the Hull and White stochastic volatility model, chosen for its mean-reverting property. The stochastic dynamic programming principle is used to derive the Hamilton-Jacobi-Bellman (HJB) equation. Because the HJB equation is difficult to solve in closed form, we use the Legendre transform and dual theory to convert the primal problem into a dual one. Furthermore, we obtain closed-form solutions for the optimal strategies under logarithmic utility via a variable transformation technique. Finally, a numerical example is presented to analyze the effects of the model parameters and to draw some economic implications. In conclusion, the Hull and White stochastic volatility model is shown to have a substantial impact on the optimal investment strategies and can therefore be exploited to improve the final wealth of the pension fund.
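A hedged sketch of the risky-asset dynamics is given below: an Euler-Maruyama simulation of a mean-reverting stochastic volatility model of Hull and White type. The abstract does not state the exact parameterization, so the SDE coefficients here are illustrative assumptions.

```python
import numpy as np

def simulate_hw_sv_path(S0=1.0, V0=0.04, mu=0.05, kappa=2.0, vbar=0.04,
                        xi=0.3, rho=0.0, T=1.0, n=252, seed=0):
    """Euler-Maruyama path of an assumed mean-reverting SV model:
        dS_t = mu * S_t dt + sqrt(V_t) * S_t dW1_t
        dV_t = kappa * (vbar - V_t) dt + xi * V_t dW2_t,
    with corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    dt = T / n
    S, V = S0, V0
    for _ in range(n):
        z1, z2 = rng.standard_normal(2)
        w2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * z2   # correlated shock
        S += mu * S * dt + np.sqrt(max(V, 0.0) * dt) * S * z1
        V += kappa * (vbar - V) * dt + xi * V * np.sqrt(dt) * w2
    return S, V

print(simulate_hw_sv_path())
```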
{"title":"Management Strategies for the Defined Benefit Pension Fund Under Stochastic Framework","authors":"P. Mwanakatwe, Lixin Song, Xiaoguang Wang","doi":"10.1080/01966324.2019.1691095","DOIUrl":"https://doi.org/10.1080/01966324.2019.1691095","url":null,"abstract":"Abstract In this article, we analyze the optimal contributions and investment strategies problem for a defined benefit pension fund. The pension fund manager allocates the capital in riskless and risky assets and the dynamics of the risky asset price follows the Hull and White stochastic volatility model. The choice of the Hull and White model is due to its mean-reverting property. The stochastic dynamic programing principle is used to derive the Hamilton-Jacob-Bellman (HJB) equation. Due to the complications of the HJB when finding the closed form solution, we use the Legendre transform and dual theory to transform the primary problem into a dual one. Furthermore, we obtain the closed form solutions for optimal strategies for the logarithm utility functions by variable transformation technique. Finally, a numerical example is conducted to analyze the effects of parameters in the model and provide some economic implications. To conclude, the Hull and White stochastic volatility model shown to have a substantial impact on the optimal investment strategies and thus can be exploited appropriately to improve the final wealth of the pension fund.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"182 - 197"},"PeriodicalIF":0.0,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1691095","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41548497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Fixed-Width Confidence Limits for the Risk Ratio with Sequential Sampling
Pub Date: 2020-04-02 | DOI: 10.1080/01966324.2019.1679301 | American Journal of Mathematical and Management Sciences, 39(1), 166-181
Hokwon A. Cho, Zhou Wang
Synoptic Abstract A sequential method is presented for determining fixed-width confidence intervals, and the corresponding optimal sample sizes, for the risk ratio of the success probabilities of two independent binomial variates. In general, since ratio estimators are biased and asymmetrical, corrections must be made when they are used in practice. We suggest using a bias-correction term to modify the maximum likelihood estimator (MLE) in developing the procedure. In addition, we study the following desirable properties of the estimator: unbiasedness, efficiency in variance, and normality. First-order asymptotic expansions are obtained to investigate large-sample properties of the proposed procedure, and a Monte Carlo experiment is carried out over various sampling scenarios to examine its finite-sample behavior. Through illustrations, we compare the performance of the proposed Wald-based confidence intervals with that of the likelihood-based confidence intervals in terms of invariance, length, and sample size.
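The sketch below shows the standard Wald-type confidence interval for the risk ratio, built by the delta method on the log scale; this is the kind of interval a fixed-width sequential rule monitors (sampling continues until the width falls below a prescribed target). The paper's bias-corrected sequential version differs in detail.

```python
import numpy as np

def risk_ratio_ci(x1, n1, x2, n2, z=1.96):
    """Wald-type CI for p1/p2 via the delta method on the log scale
    (assumes x1 > 0 and x2 > 0)."""
    p1, p2 = x1 / n1, x2 / n2
    log_rr = np.log(p1 / p2)
    se = np.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    return np.exp(log_rr - z * se), np.exp(log_rr + z * se)

# The interval width here is what a sequential stopping rule would track.
print(risk_ratio_ci(30, 100, 20, 100))
```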
{"title":"On Fixed-Width Confidence Limits for the Risk Ratio with Sequential Sampling","authors":"Hokwon A. Cho, Zhou Wang","doi":"10.1080/01966324.2019.1679301","DOIUrl":"https://doi.org/10.1080/01966324.2019.1679301","url":null,"abstract":"Synoptic Abstract A sequential method is presented for determining confidence intervals of fixed-width and corresponding optimal sample sizes for the risk ratio of probabilities of the two independent binomial variates. In general, since the ratio estimators are biased and asymmetrical, corrections must be made when they are used in practice. We suggest to use a bias-correction term for modification to the maximum likelihood estimator (MLE) to develop the procedure. In addition, we study the following desirable properties of the estimator: Unbiasedness, efficiency in variance, and normality. First-order asymptotic expansions are obtained to investigate large-sample properties of the proposed procedure. Monte Carlo experiment is carried out for various scenarios of samples for examining the finite sample behavior. Through illustrations, we compare these performance of the proposed methods, Wald-based confidence intervals with the likelihood-based confidence intervals in light of invariance, length and sample sizes.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"166 - 181"},"PeriodicalIF":0.0,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1679301","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45284817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation and Prediction for the Power-Exponential Hazard Rate Distribution Based on Record Data
Pub Date: 2020-04-02 | DOI: 10.1080/01966324.2019.1664957 | American Journal of Mathematical and Management Sciences, 39(1), 129-149
Bahman Tarvirdizade, N. Nematollahi
Synoptic Abstract The problems of classical and Bayesian estimation of the parameters of the power-exponential hazard rate distribution (P-EHRD) based on record values, and of the prediction of future record values, are considered. The parameters of the P-EHRD are estimated by the maximum likelihood and least squares methods, and the Bayes estimates are obtained by the Metropolis-Hastings method under the squared error and LINEX loss functions. In addition, an asymptotic confidence interval, two bootstrap confidence intervals, and the highest posterior density (HPD) credible interval for the unknown parameters are constructed. The problem of predicting future record values from the P-EHRD based on past record values is addressed using both maximum likelihood and Bayesian approaches. To investigate and compare the performance of the different proposed methods, a Monte Carlo simulation study is conducted. Finally, an example is presented to illustrate the estimation and prediction procedures.
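The Bayes estimates mentioned above rely on Metropolis-Hastings sampling; a generic random-walk version is sketched below. The log-posterior log_post would be built from the P-EHRD record-value likelihood and the chosen priors, which the abstract does not specify.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=10000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler (generic sketch). Under
    squared error loss the Bayes estimate is the posterior mean of the
    retained draws."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for it in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        draws[it] = theta
    return draws
```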
{"title":"Estimation and Prediction for the Power-Exponential Hazard Rate Distribution Based on Record Data","authors":"Bahman Tarvirdizade, N. Nematollahi","doi":"10.1080/01966324.2019.1664957","DOIUrl":"https://doi.org/10.1080/01966324.2019.1664957","url":null,"abstract":"Synoptic Abstract The problems of classical and Bayesian estimation of the parameters of the power-exponential hazard rate distribution (P-EHRD) based on record values and the prediction of future record values are considered. The parameters of P-EHRD are estimated by the maximum likelihood and the least squares methods, and the Bayes estimates are obtained by the Metropolis-Hastings method under the squared error loss and LINEX loss functions. Also, an asymptotic confidence interval, two bootstrap confidence intervals and the highest posterior density (HPD) credible interval for the unknown parameters are constructed. The problem of predicting the future record values from the P-EHRD based on the past record values is considered using the maximum likelihood and Bayesian approaches. To investigate and compare the performance of the different proposed methods, a Monte Carlo simulation study is conducted. Finally, an example is presented to illustrate the estimation and prediction procedures.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"129 - 149"},"PeriodicalIF":0.0,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2019.1664957","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45711963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Shootout Method for Time Minimizing Transportation Problem with Mixed Constraints
Pub Date: 2020-03-02 | DOI: 10.1080/01966324.2020.1730274 | American Journal of Mathematical and Management Sciences, 39(1), 299-314
Swati Agarwal, S. Sharma
Abstract In real-world applications, the time minimizing transportation problem provides a powerful framework for determining better ways to deliver goods to consumers on time. In this article, we propose a shootout method to determine the optimum transportation time for the time minimizing transportation problem with mixed constraints, a variant in which the constraints are of mixed nature. Cells of the transportation table are excluded from allocation one by one, in decreasing order of time, for as long as feasibility is maintained. The method is structured as an algorithm and computationally tested in MATLAB on problems of various sizes; it also solves the time minimizing transportation problem with equality constraints.
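The core idea, forbidding the slowest cells until the problem becomes infeasible, can be illustrated on the equality-constrained case with a feasibility LP, as in the sketch below; the paper's mixed-constraint shootout method is more involved, and this sketch uses SciPy rather than the authors' MATLAB implementation.

```python
import numpy as np
from scipy.optimize import linprog

def min_bottleneck_time(times, supply, demand):
    """Smallest time threshold t such that a feasible flow exists using
    only cells with time <= t (balanced equality constraints assumed:
    sum(supply) == sum(demand))."""
    m, n = times.shape
    A_eq, b_eq = [], []
    for i in range(m):                        # row (supply) constraints
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):                        # column (demand) constraints
        col = np.zeros(m * n); col[j::n] = 1
        A_eq.append(col); b_eq.append(demand[j])
    best = None
    for t in np.unique(times)[::-1]:          # forbid slowest cells first
        bounds = [(0, None) if times.flat[k] <= t else (0, 0)
                  for k in range(m * n)]
        res = linprog(np.zeros(m * n), A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        if res.status == 0:
            best = t                          # still feasible at this level
        else:
            break
    return best

times = np.array([[3, 7, 4], [5, 2, 6]])
print(min_bottleneck_time(times, supply=[10, 10], demand=[5, 8, 7]))  # 5
```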
{"title":"A Shootout Method for Time Minimizing Transportation Problem with Mixed Constraints","authors":"Swati Agarwal, S. Sharma","doi":"10.1080/01966324.2020.1730274","DOIUrl":"https://doi.org/10.1080/01966324.2020.1730274","url":null,"abstract":"Abstract In practical world applications, the time minimizing transportation problem provides a powerful framework for determining better ways for timely delivery of goods to consumers. In this article, we propose a shootout method to determine the optimum time of transportation for the time minimizing transportation problem with mixed constraints. The proposed problem is a variant of the time minimizing transportation problem in which constraints are of mixed nature. The cells of the transportation table with decreasing order of time are avoided for allocation one by one until feasibility is maintained. The method is structured in the form of an algorithm and computationally tested on MATLAB through problems of various sizes. It solves the time minimizing transportation problem with equality constraints also.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"299 - 314"},"PeriodicalIF":0.0,"publicationDate":"2020-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1730274","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43736374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling Price Cultivation for Major Quality Goods
Pub Date: 2020-02-26 | DOI: 10.1080/01966324.2020.1728452 | American Journal of Mathematical and Management Sciences, 39(1), 252-269
Miao-Sheng Chen, M. Li, hsien-bin Wang
Abstract Quality goods are high-priced products that consumers often wish to own to satisfy specific preferences. During the launch period, a firm's pricing strategy is often based on prestige pricing. This study develops a model for optimizing firms' profits through the price cultivation process. A modified Bass diffusion model is adopted in which accumulated sales are replaced by the number of potential consumers in estimating the source of diffusion power. The density function of the ceiling price that consumers are willing to pay is used to estimate the purchase percentage among potential consumers and to forecast the selling quantity. Finally, the present values of the cash flows are calculated to determine the optimal price cultivation period. The study finds, theoretically, that with a higher cost of capital, small and medium enterprises gain their optimal benefit through a price cultivation marketing campaign, whereas with a lower cost of capital, large-scale firms would not apply price-cut promotion to achieve their optimal profit.
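For reference, the textbook Bass diffusion baseline that the study modifies can be integrated numerically as below; the modification described in the abstract (replacing accumulated sales with potential consumers) is not reproduced here.

```python
import numpy as np

def bass_adoption(p=0.03, q=0.38, m=1.0, T=20.0, steps=200):
    """Cumulative adoption F(t) under the textbook Bass model,
    F'(t) = (p + q * F(t)) * (1 - F(t)), integrated by forward Euler;
    m scales the result to the market potential."""
    dt = T / steps
    F = np.zeros(steps + 1)
    for k in range(steps):
        F[k + 1] = F[k] + dt * (p + q * F[k]) * (1 - F[k])
    return m * F

print(bass_adoption()[-1])  # near-saturation adoption by t = 20
```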
{"title":"Modeling Price Cultivation for Major Quality Goods","authors":"Miao-Sheng Chen, M. Li, hsien-bin Wang","doi":"10.1080/01966324.2020.1728452","DOIUrl":"https://doi.org/10.1080/01966324.2020.1728452","url":null,"abstract":"Abstract Quality goods are high-priced products. Consumers often wish to own them to satisfy their specific preference. During initiating periods, the pricing strategy of a firm is often based on prestige pricing. This study developed a model to optimize their profits through the price cultivation process. A modified Bass Diffusion Model is adopted by replacing accumulated sales with potential consumers who estimate the source of diffusion power. The distribution density function of ceiling price is used which consumers are willing to pay to estimate the purchase percentage among potential consumers and to forecast selling quantity. Finally, the present values of cash flows were calculated to determine the optimal price cultivation period. The study theoretically found that, with higher cost of capital, the small-and-medium enterprises would gain their optimal benefits through price cultivation marketing campaign. Whereas, with the lower cost of capital, the large-scale firms would not apply price-cut promotion to achieve their optimal profit.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"252 - 269"},"PeriodicalIF":0.0,"publicationDate":"2020-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1728452","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44459546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-Parameter Logistic-Exponential Distribution: Some New Properties and Estimation Methods
Pub Date: 2020-02-24 | DOI: 10.1080/01966324.2020.1728453 | American Journal of Mathematical and Management Sciences, 39(1), 270-298
Sajid Ali, S. Dey, M. H. Tahir, M. Mansoor
Abstract The logistic-exponential (LE) distribution is the only two-parameter distribution that exhibits all five hazard rate shapes: constant, increasing, decreasing, bathtub, and upside-down bathtub. Owing to its practical utility, this article considers ten frequentist methods of estimation for the LE parameters, namely maximum likelihood, least squares, weighted least squares, percentiles, maximum and minimum spacing distance, and variants of the method of minimum distances. A Monte Carlo simulation study is carried out to compare the performance of these estimation methods. Furthermore, Bayesian estimation under the squared error loss function, assuming gamma priors for both parameters of the LE distribution, is also discussed; the posterior summaries are obtained using a Markov chain Monte Carlo algorithm. To demonstrate its practical superiority, a real-life data set is analyzed, showing that the LE model performs better than well-known two-parameter models such as the exponentiated exponential, Nadarajah-Haghighi, Birnbaum-Saunders, Weibull, gamma, inverse Gaussian, and log-normal.
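A minimal sketch of one of the ten methods, maximum likelihood, is given below, assuming the Lan-Leemis parameterization of the LE survival function S(x) = 1/(1 + (e^{λx} − 1)^κ); the log-reparameterization keeps both parameters positive during optimization.

```python
import numpy as np
from scipy.optimize import minimize

def le_logpdf(x, lam, kappa):
    """Log-density of the logistic-exponential distribution with survival
    function S(x) = 1 / (1 + (exp(lam*x) - 1)**kappa), x > 0."""
    u = np.expm1(lam * x)                     # e^{lam*x} - 1
    return (np.log(lam * kappa) + lam * x + (kappa - 1) * np.log(u)
            - 2 * np.log1p(u ** kappa))

def le_mle(x):
    """Maximum likelihood fit of (lam, kappa) by direct optimization."""
    nll = lambda t: -np.sum(le_logpdf(x, np.exp(t[0]), np.exp(t[1])))
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)                      # (lam_hat, kappa_hat)

# With kappa = 1 the LE law reduces to an exponential with rate lam, so a
# fit to exponential data with rate 0.5 should recover roughly (0.5, 1.0).
rng = np.random.default_rng(0)
print(le_mle(rng.exponential(scale=2.0, size=1000)))
```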
{"title":"Two-Parameter Logistic-Exponential Distribution: Some New Properties and Estimation Methods","authors":"Sajid Ali, S. Dey, M. H. Tahir, M. Mansoor","doi":"10.1080/01966324.2020.1728453","DOIUrl":"https://doi.org/10.1080/01966324.2020.1728453","url":null,"abstract":"Abstract The logistic exponential (LE) distribution is the only two parameter distribution that exhibits five hazard rate shapes such as constant, increasing, decreasing, bathtub, and upside-down bathtub. Due to its practical utility, this article considers ten frequentist methods of estimation, namely, maximum likelihood, least square, weighted least square, percentiles, maximum and minimum spacing distance, and variant of the method of the minimum distances for the LE parameters. A Monte Carlo simulation study is carried out to compare the performance of these estimation methods. Furthermore, Bayesian estimation under the squared error loss function assuming gamma priors for both parameters of the LE distribution is also discussed. The posterior summaries are obtained by using a Markov chain Monte Carlo algorithm. To show the practical superiority, a real-life data is analyzed to show that the LE model performs better than the well-known two-parameter models, like, exponentiated-exponential, Nadarajah–Haghighi, Birnbaum–Saunders, Weibull, Gamma, inverse Gaussian, and log-normal.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"270 - 298"},"PeriodicalIF":0.0,"publicationDate":"2020-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1728453","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49033803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Estimation of Stress-Strength Reliability Using Lower Record Values from Proportional Reversed Hazard Family
Pub Date: 2020-02-17 | DOI: 10.1080/01966324.2020.1722299 | American Journal of Mathematical and Management Sciences, 39(1), 234-251
Ajit Chaturvedi, Ananya Malhotra
Abstract In this article, a study of the stress-strength parameter based on lower record values from the one-parameter proportional reversed hazard family (PRHF) is conducted. The classical and Bayesian results of Khan and Arshad (UMVU estimation of reliability function and stress-strength reliability from proportional reversed hazard family based on lower records, American Journal of Mathematical and Management Sciences, 35(2), 171-181) and Condino et al. (Likelihood and Bayesian estimation of P(Y < X) using lower record values from a general class of distributions, Statistical Papers) are generalized to the case where the strength and stress variables belong to different families of distributions within the PRHF. The uniformly minimum variance unbiased estimator (UMVUE), the maximum likelihood estimator (MLE), and the Bayes estimator (BE) are obtained for the powers of the parameter and for the reliability functions. The estimators of the three parametric functions, namely the powers of the parameter, the reliability function, and the stress-strength reliability, are interrelated, whereas in the literature researchers have handled the three estimation problems separately. Moreover, it is shown that explicit expressions for the reliability functions are not required in order to estimate them. The technique for obtaining the estimators presented here is simpler, as it does not require Rao-Blackwellization. Simulation studies are performed to analyze the behavior of the proposed estimators, and an example using real data is provided as an illustration.
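One fact that makes the PRHF tractable here is that the stress-strength reliability has a closed form free of the baseline distribution: if the stress X and strength Y have PRHF exponents a and b, so that their CDFs are F0(x)^a and F0(x)^b, then P(X < Y) = E[F0(Y)^a] = b/(a + b). The sketch below checks this by Monte Carlo with a standard exponential baseline (an arbitrary illustrative choice).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prhf(alpha, n):
    """Draw from a PRHF member F(x) = F0(x)**alpha by inverse transform:
    if U ~ Uniform(0,1), then F0^{-1}(U**(1/alpha)) has the target law.
    Baseline F0 taken as standard exponential purely for illustration."""
    u = rng.random(n) ** (1.0 / alpha)
    return -np.log1p(-u)                      # exponential quantile F0^{-1}

# Stress X with exponent a, strength Y with exponent b:
# in the PRHF, P(X < Y) = b / (a + b), whatever the baseline F0.
a, b = 2.0, 3.0
x, y = sample_prhf(a, 100_000), sample_prhf(b, 100_000)
print((x < y).mean(), b / (a + b))            # Monte Carlo vs. closed form
```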
{"title":"On Estimation of Stress-Strength Reliability Using Lower Record Values from Proportional Reversed Hazard Family","authors":"Ajit Chaturvedi, Ananya Malhotra","doi":"10.1080/01966324.2020.1722299","DOIUrl":"https://doi.org/10.1080/01966324.2020.1722299","url":null,"abstract":"Abstract In this article, a study on the stress-strength parameter based on lower record values from one parameter proportional reversed hazard family (PRHF) has been conducted. The classical and Bayesian results of Khan and Arshad (UMVU Estimation of Reliability Function and Stress-Strength Reliability from Proportional Reversed Hazard Family Based on Lower Records. American Journal of Mathematical and Management Sciences, 35(2), 171–181) and Condino et al. (Likelihood and Bayesian estimation of P(Y < X) using lower record values from a general class of distributions. Statistical Papers), when the strength and stress variables belong to different family of distributions from PRHF have been generalized. Uniformly minimum variance unbiased estimator (UMVUE), maximum likelihood estimator (MLE) and Bayes estimator (BE) are obtained for the powers of the parameter and reliability functions and The estimators of three parametric functions, namely, powers of parameter, and are interrelated, whereas, in the literature, researchers have handled the three estimation problems separately. Moreover, it is has been shown that the expressions for and are not required to estimate them. In this article, the technique of obtaining estimators of and is simpler as it does not require Rao-Blackwellization. Simulation studies have been performed for analyzing the behavior of the proposed estimators. An example using real data has also been considered as an illustration.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"39 1","pages":"234 - 251"},"PeriodicalIF":0.0,"publicationDate":"2020-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1722299","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45359556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}