Pub Date: 2020-05-29 | DOI: 10.19139/soic-2310-5070-614
Haseeb Athar, Zubdahe Noor, S. Zarrin, H. Almutairi
The Poisson Lomax distribution was proposed by [3] as a useful model for analyzing lifetime data. In this paper, we derive recurrence relations for the single and product moments of generalized order statistics from this distribution. Further, a characterization of the distribution is carried out. Some deductions and particular cases are also discussed.
{"title":"Expectation Properties of Generalized Order Statistics from Poisson Lomax Distribution","authors":"Haseeb Athar, Zubdahe Noor, S. Zarrin, H. Almutairi","doi":"10.19139/soic-2310-5070-614","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-614","url":null,"abstract":"The Poisson Lomax distribution was proposed by [3], as a useful model for analyzing lifetime data. In this paper,we have derived recurrence relations for single and product moments of generalized order statistics for this distribution. Further, characterization of the distribution is carried out. Some deductions and particular cases are also discussed.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84195253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
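As a rough, hedged illustration of the compounding structure behind such lifetime models: one common construction (assumed here, not taken from the paper) takes the Poisson Lomax variable to be the minimum of a zero-truncated Poisson number of i.i.d. Lomax lifetimes. The parameter names `alpha`, `beta`, `lam` below are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lomax(alpha, beta, size, rng):
    # Inverse-CDF sampling from the Lomax law F(x) = 1 - (1 + beta*x)**(-alpha)
    u = rng.uniform(size=size)
    return ((1.0 - u) ** (-1.0 / alpha) - 1.0) / beta

def sample_zt_poisson(lam, rng):
    # Zero-truncated Poisson via simple rejection of zero counts
    while True:
        n = rng.poisson(lam)
        if n > 0:
            return n

def sample_poisson_lomax(alpha, beta, lam, size, rng):
    # Minimum of a zero-truncated-Poisson number of i.i.d. Lomax lifetimes
    out = np.empty(size)
    for i in range(size):
        n = sample_zt_poisson(lam, rng)
        out[i] = sample_lomax(alpha, beta, n, rng).min()
    return out

x = sample_poisson_lomax(alpha=2.0, beta=1.0, lam=1.5, size=10_000, rng=rng)
```

Monte Carlo moments of `x` obtained this way could then be checked against values produced by moment recurrence relations of the kind the paper derives.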
Pub Date: 2020-05-28 | DOI: 10.19139/soic-2310-5070-648
V. Sivaramaraju, Nilambar Sethi, R. Rajender
Cricket is popularly known as the gentlemen's game. It was introduced to the world by England and has since become the second most popular game. In this context, a few data mining and analytical techniques have been proposed for it. In this work, two different scenarios are considered for predicting the winning team based on several parameters. These scenarios correspond to the two standard formats of the game, namely one-day international (ODI) cricket and Twenty20 (T20) cricket. The prediction approaches differ from each other in the types of parameters considered and the corresponding functional strategies: one approach predicts the winner of one-day matches, and the other predicts the winner of T20 matches. Separate approaches are proposed for the two versions of the game because of the intra-variability in the strategies adopted by teams and individual players in each. The proposed strategies for the two scenarios are individually evaluated against existing benchmark works, and in each case the pair of approaches outperforms the rest in terms of prediction accuracy. The novel heuristics proposed here are thus efficient and accurate for the prediction of cricket data.
{"title":"Heuristics for Winner Prediction in International Cricket Matches","authors":"V. Sivaramaraju, Nilambar Sethi, R. Rajender","doi":"10.19139/soic-2310-5070-648","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-648","url":null,"abstract":"Cricket is popularly known as the game of gentlemen. The game of cricket has been introduced to the World by England. Since the introduction till date, it has become the second most ever popular game. In this context, few a data mining and analytical techniques have been proposed for the same. In this work, two different scenario have been considered for the prediction of winning team based on several parameters. These scenario are taken for two different standard formats for the game namely, one day international (ODI) cricket and twenty-twenty cricket (T-20). The prediction approaches differ from each other based on the types of parameters considered and the corresponding functional strategies. The strategies proposed here adopts two different approaches. One approach is for the winner prediction for one-day matches and the other is for predicting the winner for a T-20 match. The approaches have been proposed separately for both the versions of the game pertaining to the intra-variability in the strategies adopted by a team and individuals for each. The proposed strategies for each of the two scenarios have been individually evaluated against existing benchmark works, and for each of the cases the duo of approaches have outperformed the rest in terms of the prediction accuracy. The novel heuristics proposed herewith reflects efficiency and accuracy with respect to prediction of cricket data.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"602-609"},"PeriodicalIF":0.0,"publicationDate":"2020-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45722682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-05-28 | DOI: 10.19139/soic-2310-5070-678
M. Ibrahim, E. Ea, H. Yousof
In this paper, after introducing a new model along with its properties, we estimate the unknown parameters of the new model using the maximum likelihood method, the Cramér–von Mises method, the bootstrapping method, the least squares method, and the weighted least squares method. We assess the performance of all estimation methods through simulations. All methods perform well, but the bootstrapping method is best for modeling relief times, whereas the maximum likelihood method is best for modeling survival times. Censored data modeling with covariates is addressed, along with the index plot of the modified deviance residuals and its Q-Q plot.
{"title":"A New Distribution for Modeling Lifetime Data with Different Methods of Estimation and Censored Regression Modeling","authors":"M. Ibrahim, E. Ea, H. Yousof","doi":"10.19139/soic-2310-5070-678","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-678","url":null,"abstract":"In this paper and after introducing a new model along with its properties, we estimate the unknown parameter of the new model using the maximum likelihood method, Cramér-Von-Mises method, bootstrapping method, least square method and weighted least square method. We assess the performance of all estimation method employing simulations. All methods perform well but bootstrapping method is the best in modeling relief times whereas the maximum likelihood method is the best in modeling survival times. Censored data modeling with covariates is addressed along with the index plot of the modified deviance residuals and its Q-Q plot.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"610-630"},"PeriodicalIF":0.0,"publicationDate":"2020-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48725986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
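The estimator comparison described above can be illustrated on a toy model. The sketch below is illustrative only: it fits an exponential rate (not the authors' new distribution) by maximum likelihood and by a grid-based Cramér–von Mises minimum-distance fit.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=500)   # true rate = 0.5

# Maximum likelihood: for Exp(rate), the MLE is 1 / sample mean.
mle_rate = 1.0 / data.mean()

# Cramér–von Mises: minimise sum_i (F(x_(i)) - (2i-1)/(2n))^2 over the rate.
x = np.sort(data)
n = len(x)
plot_pos = (2 * np.arange(1, n + 1) - 1) / (2 * n)

def cvm(rate):
    return np.sum((1.0 - np.exp(-rate * x) - plot_pos) ** 2)

grid = np.linspace(0.05, 2.0, 2000)
cvm_rate = grid[np.argmin([cvm(r) for r in grid])]
```

With a simulation loop over many replications, the mean squared errors of the two estimators could be compared, which is the kind of assessment the abstract reports for its five methods.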
Pub Date: 2020-05-28 | DOI: 10.19139/soic-2310-5070-802
M. Hashempour, M. Doostparast, Zohreh Pakdaman
This paper deals with systems consisting of independent and heterogeneous exponential components. Since failures of components may change the lifetimes of surviving components because of load sharing, a linear trend for the conditionally proportional hazard rates is considered. Point and interval estimates of the parameters are derived on the basis of observed component failures for s (≥ 2) systems. The Fisher information matrix of the available data is also obtained, which can be used to study the asymptotic behaviour of the estimates. The generalized likelihood ratio test is implemented for testing the homogeneity of the s systems. Illustrative examples are also given.
{"title":"Statistical Inference on the Basis of Sequential Order Statistics under a Linear Trend for Conditional Proportional Hazard Rates","authors":"M. Hashempour, M. Doostparast, Zohreh Pakdaman","doi":"10.19139/soic-2310-5070-802","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-802","url":null,"abstract":"This paper deals with systems consisting of independent and heterogeneous exponential components. Since failures of components may change lifetimes of surviving components because of load sharing, a linear trend for conditionally proportional hazard rates is considered. Estimates of parameters, both point and interval estimates, are derived on the basis of observed component failures for s(≥ 2) systems. Fisher information matrix of the available data is also obtained which can be used for studying asymptotic behaviour of estimates. The generalized likelihood ratio test is implemented for testing homogeneity of s systems. Illustrative examples are also given.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"462-470"},"PeriodicalIF":0.0,"publicationDate":"2020-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45751076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
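The homogeneity test mentioned above can be sketched in its classical complete-sample form. This is only the textbook generalized likelihood ratio test for equal exponential rates across s samples, not the sequential-order-statistics likelihood of the paper.

```python
import numpy as np

def exp_homogeneity_glrt(samples):
    """-2 log likelihood ratio for H0: all exponential rates are equal.

    For sample i with n_i observations and total time T_i, the MLE is
    n_i / T_i; under H0 the pooled MLE is N / T.  The statistic is
    asymptotically chi-square with s - 1 degrees of freedom under H0.
    """
    ns = np.array([len(s) for s in samples], dtype=float)
    Ts = np.array([np.sum(s) for s in samples], dtype=float)
    N, T = ns.sum(), Ts.sum()
    loglik_alt = np.sum(ns * np.log(ns / Ts)) - N
    loglik_null = N * np.log(N / T) - N
    return 2.0 * (loglik_alt - loglik_null)

rng = np.random.default_rng(0)
same = [rng.exponential(1.0, 200) for _ in range(3)]            # homogeneous
diff = [rng.exponential(sc, 200) for sc in (0.5, 1.0, 2.0)]     # heterogeneous
stat_same = exp_homogeneity_glrt(same)
stat_diff = exp_homogeneity_glrt(diff)
```

As expected, the statistic is small for the homogeneous systems and large for the heterogeneous ones.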
Pub Date: 2020-05-28 | DOI: 10.19139/soic-2310-5070-751
Narinder Pushkarna, J. Saran, Kanika Verma
In this paper, some recurrence relations satisfied by the single and product moments of progressively Type-II right censored order statistics from the Hjorth distribution are obtained. These results are then used to compute the moments for all sample sizes and all censoring schemes (R1, R2, ..., Rm), m ≤ n, which allows us to obtain the BLUEs of the location and scale parameters based on progressively Type-II right censored samples. The best linear unbiased predictors of the censored failure times are then discussed briefly. Finally, a numerical example with real data is presented to illustrate the inferential method developed here.
{"title":"Progressively Type-II Right Censored Order Statistics from Hjorth Distribution and Related Inference","authors":"Narinder Pushkarna, J. Saran, Kanika Verma","doi":"10.19139/soic-2310-5070-751","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-751","url":null,"abstract":"In this paper some recurrence relations satisfied by single and product moments of progressively Type-II right censored order statistics from Hjorth distribution have been obtained. Then we use these results to compute the moments for all sample sizes and all censoring schemes (R1, R2, ..., Rm),m ≤ n, which allow us to obtain BLUEs of location and scale parameters based on progressively Type-II right censored samples. The best linear unbiased predictors of censored failure times are then discussed briefly. Finally, a numerical example with real data is presented to illustrate the inferential method developed here.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"481-498"},"PeriodicalIF":0.0,"publicationDate":"2020-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48966773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
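Progressive Type-II right censoring itself is easy to simulate directly: after the i-th observed failure, Ri of the surviving units are withdrawn at random. The sketch below uses Weibull lifetimes as a stand-in, since the Hjorth distribution is not implemented here.

```python
import numpy as np

def progressive_type2_censor(lifetimes, scheme, rng):
    """Observed failure times under progressive Type-II right censoring.

    scheme = (R1, ..., Rm): after the i-th observed failure, Ri surviving
    units are withdrawn at random.  Requires len(lifetimes) == m + sum(scheme).
    """
    pool = np.asarray(lifetimes, dtype=float)
    assert len(pool) == len(scheme) + sum(scheme)
    observed = []
    for r in scheme:
        k = int(np.argmin(pool))        # next failure is the smallest survivor
        observed.append(pool[k])
        pool = np.delete(pool, k)
        if r:                           # withdraw r surviving units at random
            drop = rng.choice(len(pool), size=r, replace=False)
            pool = np.delete(pool, drop)
    return np.array(observed)

rng = np.random.default_rng(0)
life = rng.weibull(2.0, size=10)                       # n = 10 units on test
times = progressive_type2_censor(life, scheme=(2, 0, 3, 0, 0), rng=rng)
```

Here m = 5 failures are observed out of n = 10 units; samples generated this way are what the moment recurrences and BLUE calculations of the paper operate on.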
Pub Date: 2020-05-28 | DOI: 10.19139/soic-2310-5070-638
G. Askari, M. Gordji
In this paper, we provide an interpretation of rationality in game theory in which a player considers the opponent's profit or loss in addition to his or her own. The goal of analyzing a game with two hyper-rational players is to provide insight into real-world situations, which are often more complex than a game with two rational players whose strategy choices are based only on individual preferences. Hyper-rationality does not mean perfect rationality; rather, it offers insight into how human decision-makers behave in interactive decisions. The findings of this research can enlarge our understanding of the psychological aspects of strategy choices in games and, at the same time, provide an analysis of the decision-making process from a cognitive-economics approach.
{"title":"Decision Making: Rational Choice or Hyper-Rational Choice","authors":"G. Askari, M. Gordji","doi":"10.19139/soic-2310-5070-638","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-638","url":null,"abstract":"In this paper, we provide an interpretation of the rationality in game theory in which player consider the profit or loss of the opponent in addition to personal profit at the game. The goal of a game analysis with two hyper-rationality players is to provide insight into real-world situations that are often more complex than a game with two rational players where the choices of strategy are only based on individual preferences. The hyper-rationality does not mean perfect rationality but an insight toward how human decision-makers behave in interactive decisions. The findings of this research can help to enlarge our understanding of the psychological aspects of strategy choices in games and also provide an analysis of the decision-making process with cognitive economics approach at the same time.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"583-589"},"PeriodicalIF":0.0,"publicationDate":"2020-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46911344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-05-28 | DOI: 10.19139/soic-2310-5070-869
Rafid S. A. Alshkaki
In this paper, a generalized modification of the Kumaraswamy distribution is proposed, and its distributional and characterizing properties are studied. This distribution is closed under scaling and exponentiation, and has some well-known distributions as special cases, such as the generalized uniform, triangular, beta, power function, Minimax, and some other Kumaraswamy-related distributions. The moment generating function, the Lorenz and Bonferroni curves, and the moments, consisting of the mean, variance, moments about the origin, and the harmonic, incomplete, probability-weighted, L-, and trimmed L-moments, are derived. The maximum likelihood method is used to estimate the parameters; it is applied to six different simulated data sets from this distribution in order to check its performance through the mean squared errors of the estimated parameters computed for the different simulated sample sizes. Finally, four real-life data sets are used to illustrate the usefulness and flexibility of this distribution in application to real-life data.
{"title":"A Generalized Modification of the Kumaraswamy Distribution for Modeling and Analyzing Real-Life Data","authors":"Rafid S. A. Alshkaki","doi":"10.19139/soic-2310-5070-869","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-869","url":null,"abstract":"In this paper, a generalized modification of the Kumaraswamy distribution is proposed, and its distributional and characterizing properties are studied. This distribution is closed under scaling and exponentiation, and has some well-known distributions as special cases, such as the generalized uniform, triangular, beta, power function, Minimax, and some other Kumaraswamy related distributions. Moment generating function, Lorenz and Bonferroni curves, with its moments consisting of the mean, variance, moments about the origin, harmonic, incomplete, probability weighted, L, and trimmed L moments, are derived. The maximum likelihood estimation method is used for estimating its parameters and applied to six different simulated data sets of this distribution, in order to check the performance of the estimation method through the estimated parameters mean squares errors computed from the different simulated sample sizes. Finally, four real-life data sets are used to illustrate the usefulness and the flexibility of this distribution in application to real-life data.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"521-548"},"PeriodicalIF":0.0,"publicationDate":"2020-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41871209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
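For reference, the baseline Kumaraswamy distribution (not the paper's generalization, whose form is not reproduced here) has the closed-form CDF F(x) = 1 − (1 − x^a)^b on (0, 1), which gives both inverse-CDF sampling and a beta-function expression for the mean.

```python
import numpy as np
from math import gamma

def kumaraswamy_ppf(u, a, b):
    # Inverse of F(x) = 1 - (1 - x**a)**b on (0, 1)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kumaraswamy_mean(a, b):
    # E[X] = b * B(1 + 1/a, b), with B the beta function
    return b * gamma(1 + 1 / a) * gamma(b) / gamma(1 + 1 / a + b)

rng = np.random.default_rng(0)
x = kumaraswamy_ppf(rng.uniform(size=100_000), a=2.0, b=3.0)
```

A simulation study like the paper's would draw such samples at several sizes and report the mean squared errors of the fitted parameters.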
Pub Date: 2020-05-27 | DOI: 10.19139/soic-2310-5070-628
B. Sartono, A. Syaiful, Dian Ayuningtyas, F. Afendi, R. Anisa, A. Salim
The sparsity principle suggests that the number of effects that contribute significantly to the response variable of an experiment is small, which means that researchers need an efficient selection procedure to identify those active effects. Most common procedures in the literature treat each effect as an individual entity, so that the selection process works on individual effects. Another principle we should consider in experimental data analysis is the heredity principle, which allows an interaction effect to be included in the model only if the corresponding main effects are included as well. This paper addresses the selection problem that takes the heredity principle into account, as Yuan and Lin [23] did using least angle regression (LARS). Instead of selecting the effects individually, the proposed approach performs the selection process in groups. The advantage of our proposed approach, which uses a genetic algorithm, is the opportunity to specify the number of desired effects, which the LARS approach cannot.
{"title":"Active Effects Selection which Considers Heredity Principle in Multi-Factor Experiment Data Analysis","authors":"B. Sartono, A. Syaiful, Dian Ayuningtyas, F. Afendi, R. Anisa, A. Salim","doi":"10.19139/soic-2310-5070-628","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-628","url":null,"abstract":"The sparsity principle suggests that the number of effects that contribute significantly to the response variable of an experiment is small. It means that the researchers need an efficient selection procedure to identify those active effects. Most common procedures can be found in literature work by considering an effect as an individual entity so that selection process works on individual effect. Another principle we should consider in experimental data analysis is the heredity principle. This principle allows an interaction effect is included in the model only if the correspondence main effects are there in. This paper addresses the selection problem that takes into account the heredity principle as Yuan and Lin [23] did using least angle regression (LARS). Instead of selecting the effects individually, the proposed approach perform the selection process in groups. The advantage our proposed approach, using genetic algorithm, is on the opportunity to determine the number of desired effect, which the LARS approach cannot.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"414-424"},"PeriodicalIF":0.0,"publicationDate":"2020-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42423398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-05-27 | DOI: 10.19139/soic-2310-5070-786
Jitendra Kumar, V. Varun, Dhirendra Kumar, A. Chaturvedi
The objective of the present study is to develop a time series model for handling a non-linear trend process using a spline function. A spline function is a piecewise polynomial in the time component. Its main advantage is that it approximates a non-linear time trend while remaining linear between consecutive join points. A unit root hypothesis is proposed to test for non-stationarity due to the presence of a unit root in the proposed model. In the autoregressive model with a linear trend, the time trend vanishes under the unit root case. However, when a non-linear trend is present and approximated by a linear spline function, although the trend component is absent under the unit root case, the intercept term shifts at the r knots. For decision making from the Bayesian perspective, the posterior odds ratio is used for the hypothesis testing problems. We derive the posterior probability for the assumed hypotheses under appropriate prior information. A simulation study and an empirical application are presented to examine the performance of the theoretical outcomes.
{"title":"Bayesian Unit Root Test for AR(1) Model with Trend Approximated","authors":"Jitendra Kumar, V. Varun, Dhirendra Kumar, A. Chaturvedi","doi":"10.19139/soic-2310-5070-786","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-786","url":null,"abstract":"The objective of present study is to develop a time series model for handling the non-linear trend process using a spline function. Spline function is a piecewise polynomial segment concerning the time component. The main advantage of spline function is the approximation, non linear time trend, but linear time trend between the consecutive join points. A unit root hypothesis is projected to test the non stationarity due to presence of unit root in the proposed model. In the autoregressive model with linear trend, the time trend vanishes under the unit root case. However, when non-linear trend is present and approximated by the linear spline function, through the trend component is absent under the unit root case, but the intercept term makes a shift with r knots. For decision making under the Bayesian perspective, the posterior odds ratio is used for hypothesis testing problems. We have derived the posterior probability for the assumed hypotheses under appropriate prior information. A simulation study and an empirical application are presented to examine the performance of theoretical outcomes.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"8 1","pages":"425-461"},"PeriodicalIF":0.0,"publicationDate":"2020-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47718971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
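A piecewise-linear trend with knots can be sketched with a truncated-power spline basis. The example below is an illustrative least-squares fit, not the paper's Bayesian AR(1) formulation; the knot locations and coefficients are made up.

```python
import numpy as np

def linear_spline_basis(t, knots):
    """Design matrix [1, t, (t-k1)+, ..., (t-kr)+] for a piecewise-linear trend."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t), t]
    cols += [np.clip(t - k, 0.0, None) for k in knots]
    return np.column_stack(cols)

t = np.arange(100)
X = linear_spline_basis(t, knots=[30, 60])

# Broken-line trend: slope 0.2 up to t = 30, then slope 0.2 - 0.35 = -0.15
rng = np.random.default_rng(0)
y = 1.0 + 0.2 * t - 0.35 * np.clip(t - 30, 0, None) + rng.normal(0, 0.5, size=100)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted coefficients recover the base slope and the slope change at the first knot, while the coefficient at the unused second knot stays near zero; in the paper's setting, a shift of the intercept at the r knots plays the analogous role under the unit root.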
Pub Date: 2020-05-27 | DOI: 10.19139/soic-2310-5070-937
M’barek Iaousse, Amal Hmimou, Zouhair El Hadri, Yousfi El Kettani
Structural Equation Modeling (SEM) is a statistical technique that assesses a hypothesized causal model by showing whether or not it fits the available data. One of the major steps in SEM is the computation of the covariance matrix implied by the specified model. This matrix is crucial for estimating the parameters, testing the validity of the model, and making useful interpretations. In the present paper, two methods used for this purpose are presented: Jöreskog's formula and the finite iterative method. These methods differ in the manner of computation and rest on some a priori assumptions. To make the computation simpler and the assumptions less restrictive, a new algorithm for the computation of the implied covariance matrix is introduced. It consists of a modification of the finite iterative method. An illustrative example of the proposed method is presented. Furthermore, theoretical and numerical comparisons of the exposed methods with the proposed algorithm are discussed and illustrated.
{"title":"A Modified Algorithm for the Computation of the Covariance Matrix Implied by a Structural Recursive Model with Latent Variables Using the Finite Iterative Method","authors":"M’barek Iaousse, Amal Hmimou, Zouhair El Hadri, Yousfi El Kettani","doi":"10.19139/soic-2310-5070-937","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-937","url":null,"abstract":"Structural Equation Modeling (SEM) is a statistical technique that assesses a hypothesized causal model byshowing whether or not, it fits the available data. One of the major steps in SEM is the computation of the covariance matrix implied by the specified model. This matrix is crucial in estimating the parameters, testing the validity of the model and, make useful interpretations. In the present paper, two methods used for this purpose are presented: the J¨oreskog’s formula and the finite iterative method. These methods are characterized by the manner of the computation and based on some apriori assumptions. To make the computation more simplistic and the assumptions less restrictive, a new algorithm for the computation of the implied covariance matrix is introduced. It consists of a modification of the finite iterative method. An illustrative example of the proposed method is presented. Furthermore, theoretical and numerical comparisons between the exposed methods with the proposed algorithm are discussed and illustrated","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90678282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
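For the observed-variable (path-analysis) special case, the implied covariance matrix has the familiar closed form Sigma = (I - A)^{-1} Psi (I - A)^{-T} for a recursive model y = Ay + e with path matrix A and error covariance Psi. The sketch below illustrates this baseline, not the paper's latent-variable algorithm; the path coefficients are made up.

```python
import numpy as np

def implied_covariance(A, Psi):
    """Sigma = (I - A)^{-1} Psi (I - A)^{-T} for a recursive model y = A y + e."""
    I = np.eye(A.shape[0])
    B = np.linalg.inv(I - A)
    return B @ Psi @ B.T

# y1 exogenous; y2 = 0.5*y1 + e2; y3 = 0.3*y1 + 0.4*y2 + e3
A = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.3, 0.4, 0.0]])
Psi = np.diag([1.0, 0.5, 0.25])
Sigma = implied_covariance(A, Psi)
```

Because A is strictly lower triangular (the model is recursive), I - A is always invertible, which is the kind of a priori assumption the iterative methods discussed in the paper aim to relax or avoid.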