Optimal row-column designs for CDC method (1)
M. K. Sharma, Mekonnen Tadess, Mohammed Sirage Ibrahim
Model Assisted Statistics and Applications, 2022-08-26. DOI: 10.3233/mas-211307
In this article, we present row-column designs for Griffing's complete diallel cross method (1) for p parents, constructed from a complete set of (p-1) mutually orthogonal Latin squares when p is a prime or a prime power. These row-column designs for method (1) are new and universally optimal in the sense of Kempthorne (1956) and Kiefer (1975), and they are orthogonally blocked; in an orthogonally blocked design, no loss of efficiency on the comparisons of interest is incurred due to blocking. The analysis includes the analysis of variance (ANOVA) and the estimation of general combining ability (gca), specific combining ability (sca) and reciprocal combining ability (rca). Tables of universally optimal row-column designs are provided.

A new statistical test for distinguishing 2-partitions of a finite set
S. Dronov
Model Assisted Statistics and Applications, 2022-08-26. DOI: 10.3233/mas-221352
This paper considers the family of so-called 2-partitions of a finite set; each of them divides the set under study into two disjoint parts. Under the assumption that two such partitions are chosen at random, the exact probability distribution of a special cluster metric on this family is found. On this basis, a new statistical test for checking the significance of differences between 2-partitions is proposed. In addition, the distribution of the values of this metric is found for the case when both partitions are of the ledge type, obtained by ordering the set of objects in ascending order of the values of some numerical indicator. This means that one of the parts of each partition, which in some sense is the main one, is a segment; the boundaries of such a segment are called normative. By comparing various estimates of the normative boundaries based on sample data, the concept of indicative certainty of the numerical indicator is introduced. It can be regarded as the degree of confidence in this indicator as a basis for deciding whether an object belongs to the main part of the ledge partition. An application of the results to medical data processing is considered.

Statistics in marketing research: A brief review on special methods and applications
S. Lipovetsky
Model Assisted Statistics and Applications, 2022-08-26. DOI: 10.3233/mas-229951
This work reviews statistical techniques developed for solving special marketing research problems. The approaches include item comparisons via Thurstone and Bradley-Terry scaling, total unduplicated reach and frequency (TURF) and the Shapley value, sample balancing and price sensitivity analysis, customer satisfaction and identification of key drivers, best-worst and max-diff priority estimation, item cannibalization and synergy, and various other methods. The described techniques have been developed and employed in numerous marketing research projects and are helpful for solving a wide range of practical problems.

An EM model for analysis of discrete time competing risks data with missing failure causes
Bonginkosi D. Ndlovu, S. Melesse, T. Zewotir
Model Assisted Statistics and Applications, 2022-08-26. DOI: 10.3233/mas-211335
Larson and Dinse (1985) introduced the mixture model as an additional competing risks model. In the same article, the authors suggested that this model can be extended to handle missing failure causes in the data. We respond to this proposal and develop a regression model for the analysis of data with this complication. We also demonstrate that, with minimal adjustments, the proposed model can be applied in discrete time. This development will benefit discrete-time competing risks, since the analysis of data with missing failure causes in that setting has not received adequate attention. The mixture model has two components, the incidence component and the latency component. It is demonstrated that the parameters of the latency component, as proposed by Larson and Dinse (1985), can be estimated by fitting a certain Poisson regression.

The Dual-Dagum family of distributions: Properties, regression and applications to COVID-19 data
Elisângela C. Biazatti, G. Cordeiro, Maria do Carmo Soares de Lima
Model Assisted Statistics and Applications, 2022-08-26. DOI: 10.3233/mas-221354
A new Dual-Dagum-G (DDa-G) family is defined as a competitor to the Beta-G and Kumaraswamy-G generators, which are widely applied in several areas. Some of its mathematical properties are addressed. We obtain the maximum likelihood estimates, and simulations support the consistency of these estimates. The flexibility of the family is shown on a COVID-19 data set. We also propose a new regression model based on a special distribution of the DDa-G family and provide a sensitivity analysis using data from 1,951 COVID-19 patients collected in Curitiba, Brazil.

Estimation of stress-strength reliability for Dagum distribution based on progressive type-II censored sample
Ritu Kumari, Sangeeta Arora, K. Mahajan
Model Assisted Statistics and Applications, 2022-05-23. DOI: 10.3233/mas-220014
In this paper, both classical and Bayesian estimation of the stress-strength reliability (η) of the Dagum distribution under progressive type-II censoring is carried out. Maximum likelihood estimators (MLEs) of η are obtained, along with asymptotic, bootstrap-p (boot-p) and bootstrap-t (boot-t) confidence intervals. Bayes estimators of η, together with highest posterior density (HPD) credible intervals based on informative and non-informative priors, are also obtained; both symmetric and asymmetric loss functions are considered for the Bayesian estimation. A Monte Carlo simulation study is carried out to assess the performance of the estimators, and real-life data sets are analysed for illustration.

Small area estimation of food insecurity prevalence for the state of Uttar Pradesh in India
Hukum Chandra, Bhanu Verma
Model Assisted Statistics and Applications, 2022-05-23. DOI: 10.3233/mas-220011
The second Sustainable Development Goal (SDG) of the United Nations aims to eliminate hunger and food insecurity by 2030, but in light of the COVID-19 pandemic this has become a far more critical and persistent issue, globally as well as in India. The nationwide socio-economic surveys of the National Sample Survey Office (NSSO) in India are designed to produce reliable and representative estimates of important food insecurity parameters at the state and national levels, separately for the rural and urban sectors, but these surveys cannot be used directly to generate reliable district-level estimates. At the same time, efficient and representative disaggregated estimates of the extent (or incidence) of food insecurity have a direct bearing on designing effective policy plans and monitoring progress towards eliminating food insecurity. Against this backdrop, the paper outlines a small area estimation approach for estimating the incidence of food insecurity across the districts of rural Uttar Pradesh in India by linking data from the 2011–12 Household Consumer Expenditure Survey of the NSSO and the 2011 Indian Population Census. A spatial map showing the disparity in the incidence of food insecurity across Uttar Pradesh has been generated. These disaggregated estimates are relevant to SDG indicator 2.1.2 (severity of food insecurity). The estimates and map of food insecurity incidence are expected to deliver valuable information to policy analysts and decision makers.

Improved randomized response technique for estimating population proportion of a sensitive characteristic
Manpreet Kaur, I. S. Grewal, S. S. Sidhu
Model Assisted Statistics and Applications, 2022-05-23. DOI: 10.3233/mas-220012
Obtaining truthful answers to sensitive questions and estimating population parameters for variables of a sensitive nature is a persistent problem in survey sampling. In this paper, the problem of estimating the population proportion of a sensitive characteristic is studied. For this purpose, an improved randomized response device has been developed for the two cases of the unrelated-question technique: case I, when the proportion of the unrelated characteristic is known, and case II, when it is unknown. Two estimators of the population proportion of the sensitive characteristic are proposed, one for a known value of the unrelated proportion πy and the other for an unknown value, and both are shown to be unbiased. Expressions for the variances of the proposed estimators, and unbiased estimates of these variances, are obtained. The optimum sample sizes at which the variances of the proposed estimators are minimised are also worked out. An empirical study is conducted, and it is shown graphically that the proposed estimators are better than the estimators of Mangat (1992) and Tiwari and Mehta (2016).

Marshall Olkin extended exponentiated Gamma distribution and its applications
G. A. S. Aguilar, F. A. Moala, R. P. de Oliveira
Model Assisted Statistics and Applications, 2022-05-23. DOI: 10.3233/mas-220015
Different methods for obtaining new probability distributions have been introduced in the literature in recent years; for example, Gupta et al. (1998) proposed an interesting uni-parametric lifetime distribution, the Exponentiated Gamma (EG), whose hazard function can take increasing and bathtub shapes. In this paper, we build a new two-parameter distribution, the Marshall-Olkin Extended Exponentiated Gamma (MOEEG) distribution, obtained by applying the Marshall-Olkin method to the EG distribution. The hazard function of this new distribution can accommodate monotonic, non-monotonic and unimodal shapes, allowing a better fit to data with greater variability. In addition to this flexibility, it contains only two parameters, which keeps parameter estimation simple, unlike other distributions proposed in the literature that have three or more parameters. Several properties of the new distribution are presented, such as the n-th moment, the r-th moment of residual life, the r-th moment of reversed residual life, stochastic ordering, entropy, mean deviation, the Bonferroni and Lorenz curves, skewness, kurtosis, order statistics, and the stress-strength parameter. We also apply two different estimation methods, maximum likelihood and a Bayesian approach. Applications to real data illustrate the usefulness of the new distribution.

{"title":"Dr. Hukum Chandra","authors":"","doi":"10.3233/mas-220009","DOIUrl":"https://doi.org/10.3233/mas-220009","url":null,"abstract":"","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42234614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}