Exact and Approximation Algorithms for Minimizing Energy in Wireless Sensor Data Gathering Network with Data Compression
Pub Date: 2021-08-27 | DOI: 10.1080/01966324.2021.1960226
Chaofan Li, Wenchang Luo
Abstract This article studies the problem of minimizing the total energy consumed in a heterogeneous wireless sensor data gathering network with data compression. In such a network, a set of sensors collects data, and all the data must be transmitted to a single base station, which consumes energy whether it is receiving data or idle. To shorten transmission, each sensor has the option to compress its collected data before sending it to the base station; however, compression takes time, delaying the start of transmission, and itself consumes energy. The task is to choose which sensors should compress their data and to determine the order in which the sensors transmit to the base station, so that the total energy consumed is minimized. We prove that the problem is NP-hard and propose an exact pseudo-polynomial dynamic programming algorithm. Furthermore, we present an approximation algorithm whose performance ratio depends on the given per-unit-time energy consumption parameters of the different energy-consuming activities.
{"title":"Exact and Approximation Algorithms for Minimizing Energy in Wireless Sensor Data Gathering Network with Data Compression","authors":"Chaofan Li, Wenchang Luo","doi":"10.1080/01966324.2021.1960226","DOIUrl":"https://doi.org/10.1080/01966324.2021.1960226","url":null,"abstract":"Abstract This article studies the problem of minimizing the total energy consumed in a heterogeneous wireless sensor data gathering network with data compression. In a wireless sensor data gathering network, a set of sensors is used to collect data and all the data are required to be transmitted to a single base station. Whether the base station is working in data receiving or idle mode, it consumes energy. To reduce the data transmission time, each sensor has the option to compress its collected data to decrease the original size before sending the data to the base station. However, compressing data takes some time delaying the data transmission starting time and also consuming energy. The task is to choose which sensors should compress their data and determine the data transmission order between the sensors and the base station with the goal of minimizing the total energy consumed. We prove that the studied problem is NP-hard, and propose a pseudo-polynomial dynamic programming exact algorithm. Furthermore, we present an approximation algorithm with the performance ratio that depends on the given energy consuming parameters for each unit time in different energy consuming activities.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"41 1","pages":"305 - 315"},"PeriodicalIF":0.0,"publicationDate":"2021-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44214786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Stage Estimation Methodologies for an Inverse Gaussian Mean with Known Coefficient of Variation
Pub Date: 2021-08-26 | DOI: 10.1080/01966324.2021.1966350
Neeraj Joshi, Sudeep R. Bapat, A. Shukla
Abstract In this paper, we develop accelerated sequential and multi-stage procedures for estimating the mean of an inverse Gaussian distribution when the population coefficient of variation is known. The problems of minimum risk and bounded risk point estimation are handled. The estimation procedures are developed under an interesting weighted squared-error loss function, and our aim is to control the associated risk functions. Instead of the usual estimator, the sample mean, the Searls (1964) estimator is used for estimation. Second-order asymptotics are obtained for the expected sample size and the risk associated with the proposed multi-stage procedures. Further, it is established that the Searls estimator dominates the usual estimator (the sample mean) under the proposed procedures. Extensive simulation analysis is carried out in support of the encouraging performance of the proposed methodologies, and a real data example is provided for illustrative purposes.
{"title":"Multi-Stage Estimation Methodologies for an Inverse Gaussian Mean with Known Coefficient of Variation","authors":"Neeraj Joshi, Sudeep R. Bapat, A. Shukla","doi":"10.1080/01966324.2021.1966350","DOIUrl":"https://doi.org/10.1080/01966324.2021.1966350","url":null,"abstract":"Abstract In this paper, we develop accelerated sequential and stage procedures for estimating the mean of an inverse Gaussian distribution when the population coefficient of variation is known. The problems of minimum risk and bounded risk point estimation are handled. The estimation procedures are developed under an interesting weighted squared-error loss function and our aim is to control the associated risk functions. In spite of the usual estimator, i.e., the sample mean, Searls (1964) estimator is utilized for the purpose of estimation. Second-order asymptotics are obtained for the expected sample size and risk associated with the proposed multi-stage procedures. Further, it is established that the Searls’ estimator dominates the usual estimator (sample mean) under the proposed procedures. Extensive simulation analysis is carried out in support of the encouraging performances of the proposed methodologies and a real data example is also provided for illustrative purposes.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"41 1","pages":"334 - 349"},"PeriodicalIF":0.0,"publicationDate":"2021-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49246389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft Matrix Game: A Hesitant Fuzzy MCDM Approach
Pub Date: 2021-08-23 | DOI: 10.1080/01966324.2020.1730273
Jishu Jana, Sankar Kumar Roy
Abstract Soft set theory has emerged recently as a new mathematical tool for handling uncertainty. Decision makers are sometimes unsure about the decision-making criteria, and soft set theory provides a way to deal with such uncertainty. Multi-criteria decision making (MCDM) involves choosing the best among several alternatives. MCDM methods such as TOPSIS and VIKOR rely on an aggregating function expressing "closeness to the ideal", which underlies the compromise solution. The VIKOR method of compromise ranking determines a compromise solution that provides a maximum for the "maximizing player" and a minimum for the "opponent", an effective approach in an MCDM game. The TOPSIS method selects the solution with the shortest distance to the positive ideal solution (PIS) and the largest distance from the negative ideal solution (NIS). The hesitant fuzzy soft set, in turn, is an appropriate tool for handling the imprecise parameters introduced into MCDM problems by the decision maker (DM). In this paper, we extend the VIKOR and TOPSIS methods to solve MCDM game problems with hesitant fuzzy soft payoffs and determine the optimal strategies. Finally, a numerical example is included to verify the extended VIKOR approach, and the results are compared with those of the TOPSIS method. The paper ends with conclusions and outlooks.
{"title":"Soft Matrix Game: A Hesitant Fuzzy MCDM Approach","authors":"Jishu Jana, Sankar Kumar Roy","doi":"10.1080/01966324.2020.1730273","DOIUrl":"https://doi.org/10.1080/01966324.2020.1730273","url":null,"abstract":"Abstract Soft set theory has emerged recently as a new mathematical tool to handle uncertainty. Sometimes decision makers are not sure about the decision-making criteria, where soft set theory provides an idea to deal with such uncertainties. Multi-criteria decision making (MCDM) involves choosing the best from several alternatives. MCDM methods such as TOPSIS and VIKOR depend on an aggregating function for presenting “closeness to the ideal” which arises due to the compromise solution. The VIKOR method of compromise ranking describes a compromise solution, providing a maximum for the “maximizing player” and minimum for the “opponent”, which is an effective approach in an MCDM game. TOPSIS method presents a solution with the shortest distance to the positive ideal solution (PIS) and largest distance from the negative ideal solution (NIS). Also, hesitant fuzzy soft set is an appropriate tool to tackle the imprecise parameters introduced in MCDM problems by the decision maker (DM). In this paper, we extend the VIKOR and TOPSIS methods for solving MCDM game problems with hesitant fuzzy soft payoffs to determine the optimal strategies. Finally, a numerical example is incorporated to verify the extended VIKOR approach and the results are compared with those for the TOPSIS method. The paper ends with conclusions and outlooks.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"107 - 119"},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41432420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demosaicing Method for Multispectral Images Using Derivative Operations
Pub Date: 2021-08-23 | DOI: 10.1080/01966324.2021.1939206
Medha Gupta, P. Goyal
Abstract Multispectral images are useful in various applications such as remote sensing, medical imaging, military surveillance, and vision inspection for food quality control, but the high cost of multispectral cameras limits their use. Low-cost multispectral cameras can be built from a single-sensor multispectral filter array (MSFA) together with a demosaicing method that reconstructs the complete image from the undersampled multispectral data such a system acquires. In this paper, we present a new demosaicing method for multispectral images based on derivative operations. MSFA patterns are often designed with the binary tree method, with the band sequence chosen so that the middle band has the highest probability of appearance in the pattern. In the proposed method, the middle spectral band pixel values are estimated first and then used to compute derivatives that help estimate the pixel values of the other spectral bands. Unlike many recently developed demosaicing methods that apply only to multispectral images with a specific number of bands, the proposed method is generic and can be applied to multispectral images with any number of spectral bands. The TokyoTech and CAVE multispectral image datasets are used for evaluation, and the experimental results show that the proposed method outperforms the best known generic multispectral demosaicing method, binary tree edge sensing (BTES), on both datasets and across different band sizes.
{"title":"Demosaicing Method for Multispectral Images Using Derivative Operations","authors":"Medha Gupta, P. Goyal","doi":"10.1080/01966324.2021.1939206","DOIUrl":"https://doi.org/10.1080/01966324.2021.1939206","url":null,"abstract":"Abstract Multispectral images have been found useful for various applications such as remote sensing, medical imaging, military surveillance, vision inspection for food quality control, etc. but the high costs of multispectral cameras limit their usage. Low cost multispectral cameras can be developed using a single sensor multispectral filter array (MSFA) and a demosaicing method to reconstruct the complete image from under sampled multispectral image data acquired using a single sensor MSFA imaging system. In this paper, we present a new demosaicing method based on the derivative operations for the multi-spectral images. To design MSFA patterns, binary tree method is often used and the band sequence is chosen such that the middle band has a higher probability of appearance in MSFA pattern. In the proposed method, first the middle spectral band pixel values are estimated and then it is used to compute derivatives that help estimate other spectral band pixel values. Unlike many recently developed demosaicing methods that are applicable to only specific band size multispectral images, the proposed method is generic and can be applied to obtain multispectral images for any number of spectral bands. The TokyoTech dataset and CAVE dataset of multispectral images are used for the evaluation purpose, and the experimental results show that the proposed method outperforms currently best known generic multispectral demosaicing method, namely binary tree edge sensing (BTES) method on both datasets and for different band-size multispectral images.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"163 - 176"},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2021.1939206","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43575668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Computational Approach for One and Two Dimensional Fisher's Equation Using Quadrature Technique
Pub Date: 2021-08-23 | DOI: 10.1080/01966324.2021.1933660
G. Arora, V. Joshi
Abstract In this paper, a refined form of the differential quadrature method is proposed to compute the numerical solution of the one- and two-dimensional convection-diffusion Fisher's equation. Cubic trigonometric B-spline basis functions are applied in a modified form in the differential quadrature method to obtain the weighting coefficients. The method reduces the nonlinear Fisher partial differential equation to a system of ordinary differential equations, which is solved with the Runge-Kutta method. Six numerical test problems for Fisher's equation are analyzed to establish the efficiency of the proposed method. The stability of the method is also discussed using the matrix method.
{"title":"A Computational Approach for One and Two Dimensional Fisher’s Equation Using Quadrature Technique","authors":"G. Arora, V. Joshi","doi":"10.1080/01966324.2021.1933660","DOIUrl":"https://doi.org/10.1080/01966324.2021.1933660","url":null,"abstract":"Abstract In this paper, a refined form of the differential quadrature method is proposed to compute the numerical solution of one and two-dimensional convection-diffusion Fisher’s equation. The cubic trigonometric B-spline basis functions are applied in the differential quadrature method in a modified form to obtain the weighting coefficients. The application of the method reduces nonlinear Fisher’s partial differential equation into a system of ordinary differential equations which can be solved by applying the Runge-Kutta method. Six numerical test problems of Fisher’s equation are analyzed numerically to establish the efficiency of the proposed method. The stability of the method is also discussed using the matrix method.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"145 - 162"},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2021.1933660","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47369844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting Bitcoin Return Using Extreme Value Theory
Pub Date: 2021-08-23 | DOI: 10.1080/01966324.2021.1950086
Mohammad Tariquel Islam, K. Das
Abstract This study investigates and develops the ability of extreme value theory (EVT) to predict bitcoin returns. EVT deals with rare but extreme events, such as severe losses or excessive damages, and is used as a powerful statistical tool in various disciplines, including finance, engineering, environmental science, and actuarial science. As the largest cryptocurrency in existence, bitcoin is characterized above all by great volatility, and predicting its return is complex and important precisely because of that extreme behavior. There is little substantial research applying EVT to bitcoin. This study has three objectives: first, to confirm the extreme nature of bitcoin returns through various statistical tests; second, to model the returns using two EVT approaches (the block maxima approach and the peaks-over-threshold approach); and third, to assess uncertainty by predicting 5-, 10-, 20-, 50-, and 100-year bitcoin return levels with 95% confidence intervals under both methods. These results could serve policymakers and investors, as the return levels can be useful in characterizing and predicting bearish and bullish trends. Moreover, they can serve as starting points for future studies of the stationary and non-stationary properties of bitcoin returns.
{"title":"Predicting Bitcoin Return Using Extreme Value Theory","authors":"Mohammad Tariquel Islam, K. Das","doi":"10.1080/01966324.2021.1950086","DOIUrl":"https://doi.org/10.1080/01966324.2021.1950086","url":null,"abstract":"Abstract The study investigates and develops the ability of the extreme value theory (EVT) to predict bitcoin return. EVT is used to deal with rare but extreme events, such as severe losses or excessive damages. It is being used as a powerful statistical tool in various disciplines, including finance, engineering, environmental science, and actuarial science. As the largest among all cryptocurrencies in existence, bitcoin’s behavior is primarily characterized by great volatility. Predicting bitcoin return is complex and important, primarily because of the extreme nature of its return. There is not enough substantial research involving EVT in bitcoin analysis. This study has three objectives. First, confirming the extreme nature of bitcoin return by various statistical tests; second, modeling the bitcoin return using two different EVT approaches (block maxima approach and peak over threshold approach); and third, assessing uncertainties by predicting bitcoin return levels for 5-, 10-, 20-, 50-, and 100-years with a 95% confidence interval using both of these methods. These results could certainly serve policymakers and investors, as these return levels can be useful in characterizing bearish and bullish trends and predicting the same. Moreover, these can serve as starting points for future studies regarding the stationary and non-stationary properties of bitcoin return.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"177 - 187"},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2021.1950086","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46606379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K1K2-Inflated Conway–Maxwell–Poisson Model: Bayesian Predictive Modeling with an Application in Soccer Matches
Pub Date: 2021-08-12 | DOI: 10.1080/01966324.2021.1960225
A. Sadeghkhani
Abstract The purpose of this paper is twofold. First, to introduce a multiple-inflated version of the Conway–Maxwell–Poisson model that can flexibly model count data in which some values occur with high frequency, together with over- or under-dispersion. This model includes the Poisson, Conway–Maxwell–Poisson (COMP), zero-inflated Poisson (ZIP), multiple-inflated Poisson, and zero-inflated Conway–Maxwell–Poisson (ZICOMP) models as special cases. Second, to estimate the future distribution of the multiple-inflated Conway–Maxwell–Poisson model under the Kullback–Leibler divergence (loss) function. The model is fitted to the number of penalties scored in the Premier League's 2019–20 season, and its future distribution is estimated using Bayes and plug-in methods.
{"title":"K 1 K 2–Inflated Conway–Maxwell–Poisson Model: Bayesian Predictive Modeling with an Application in Soccer Matches","authors":"A. Sadeghkhani","doi":"10.1080/01966324.2021.1960225","DOIUrl":"https://doi.org/10.1080/01966324.2021.1960225","url":null,"abstract":"Abstract The purpose of this paper is two folds. First, to introduce a multiple inflated version of the Conway–Maxwell–Poisson model, that can be used flexibly to model count data when some values have high frequency along with over– or under–dispersion. Also, this model includes Poisson, Conway–Maxwell–Poisson (COMP), zero–inflated Poisson (ZIP), multiple–inflated Poisson, and zero–inflated Conway–Maxwell–Poisson (ZICOMP). Second, to estimate the future distribution from the multiple inflated Conway–Maxwell–Poisson model under the Kullback Leibler difference (loss) function. This model is fitted to the number of penalties scored in the Premier League’s 2019–20 season and its future distribution using Bayes and plug–in methods is estimated.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"41 1","pages":"295 - 304"},"PeriodicalIF":0.0,"publicationDate":"2021-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47029122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Some Inferential Results on a Two Parameter Generalized Half Normal Distribution
Pub Date: 2021-08-09 | DOI: 10.1080/01966324.2021.1959469
Matinee Sudsawat, N. Pal
Abstract The two-parameter generalized half-normal distribution (2P-GHND) has been gaining attention lately due to its flexibility relative to other popular distributions on the positive side of the real line. Unlike the gamma, lognormal, or inverse Gaussian distributions, the 2P-GHND can be either negatively or positively skewed depending on its shape parameter, a property it shares with the Weibull distribution. In this work we address two inferential problems for the 2P-GHND: (a) we prove analytically the existence and uniqueness of the MLE of the model parameters, obtained by differentiating the log-likelihood function; and (b) we consider hypothesis testing on the mean of the distribution, where it is shown that a parametric bootstrap (PB) method based on the likelihood ratio test (LRT) statistic works far better than the other asymptotic tests for small to moderate sample sizes. Extensive simulation results are provided to support this observation.
{"title":"Some Inferential Results on a Two Parameter Generalized Half Normal Distribution","authors":"Matinee Sudsawat, N. Pal","doi":"10.1080/01966324.2021.1959469","DOIUrl":"https://doi.org/10.1080/01966324.2021.1959469","url":null,"abstract":"Abstract A two-parameter generalized half normal distribution (2 P-GHND) is gaining attention lately due to its flexibility over other popular distributions on the positive side of the real line. Unlike gamma, lognormal or inverse Gaussian distributions, 2 P-GHND can be either negatively or positively skewed depending on its shape parameter, a property similar to Weibull distribution. In this work we address two inferential problems related to 2 P-GHND: (a) prove analytically the existence and uniqueness of the MLE of the model parameters attained through differentiation of the log-likelihood function; and (b) consider the hypothesis testing on the mean of the distribution where it is shown that a parametric bootstrap (PB) method based on the likelihood ratio test (LRT) statistic works far better than the other asymptotic tests for small to moderate sample sizes. Extensive simulation results have been provided to support this observation.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"41 1","pages":"278 - 294"},"PeriodicalIF":0.0,"publicationDate":"2021-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49634626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parametric Confidence Intervals of Spmk for Generalized Exponential Distribution
Pub Date: 2021-08-04 | DOI: 10.1080/01966324.2021.1949412
S. Dey, Mahendra Saha, Sumit Kumar
Abstract The objective of this article is to compare the highest posterior density (HPD) credible interval with three bootstrap confidence intervals (BCIs) and the asymptotic confidence interval (ACI), using maximum likelihood and Bayesian approaches, for a new process capability index, Spmk, when the underlying distribution is generalized exponential. The new index can be used for normal as well as non-normal quality characteristics. Through extensive simulation studies and two real-life examples based on industry data, we compare the performance of the classical and Bayes estimates under different loss functions, and compare the HPD credible intervals, the three BCIs, and the ACIs in terms of coverage probability, average width, and relative coverage for the index Spmk.
{"title":"Parametric Confidence Intervals of Spmk for Generalized Exponential Distribution","authors":"S. Dey, Mahendra Saha, Sumit Kumar","doi":"10.1080/01966324.2021.1949412","DOIUrl":"https://doi.org/10.1080/01966324.2021.1949412","url":null,"abstract":"Abstract The objective of this article is to compare highest posterior density (HPD) credible interval with three bootstrap confidence intervals (BCIs) as well as with asymptotic confidence interval (ACI) using maximum likelihood and Bayesian approaches of a new process capability index, Spmk when the underlying distribution is generalized exponential. This new index can be used for normal as well as non-normal quality characteristics. Through extensive simulation studies and with two real life examples related to industry data, we compare the performances of classical and the Bayes estimates based on different loss functions and compared among the HPD credible intervals, three BCIs and ACIs in terms of coverage probabilities, average width, and respective relative coverages of the index Spmk , respectively.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"41 1","pages":"201 - 222"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2021.1949412","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43817987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reparameterized Flexible Weibull Distribution with Some Applications
Pub Date: 2021-08-04 | DOI: 10.1080/01966324.2021.1957731
F. Prataviera
Abstract A reparameterized flexible Weibull distribution, indexed by the median and a shape parameter, is proposed for the development of regression models that accommodate censored data. The reparameterization permits a straightforward interpretation of the regression coefficients in terms of the median. Model estimation is implemented via classical and Bayesian approaches, and Monte Carlo simulations are carried out to evaluate the estimators' performance in finite samples. In addition, the model is applied to three real data sets.
{"title":"Reparameterized Flexible Weibull Distribution with Some Applications","authors":"F. Prataviera","doi":"10.1080/01966324.2021.1957731","DOIUrl":"https://doi.org/10.1080/01966324.2021.1957731","url":null,"abstract":"Abstract A reparameterized flexible Weibull distribution indexed by median and a shape parameter is proposed for the development of regression models which includes the possibility of censored data. The reparameterization permits a straightforward interpretation of the regression coefficients in terms of the median. Model estimation is implemented via Classical and Bayesian approaches, and Monte Carlo simulations are carried out in order to evaluate the estimators performances for finite samples. In addition, the model is applied to three real data sets.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"41 1","pages":"259 - 277"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42835420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}