Pub Date: 2021-01-22 | DOI: 10.19139/SOIC-2310-5070-1077 | pp. 223-240
The Impact of International Trade on Central Bank Efficiency: An Application of DEA and Tobit Regression Analysis
Q. Dar, Young-Hyo Ahn, Gulbadin Farooq Dar
The purpose of this study is to introduce a novel methodology for measuring central bank efficiency. Data envelopment analysis (DEA) is applied to a combination of three input and two output variables characterizing the economic balance in international trade. A super-efficiency DEA model is used to rank and compare the efficiency of different central banks, while the Malmquist productivity index (MPI) measures productivity change over time. The study is further extended to quantify the impact of international trade dimensions on central bank efficiency using Tobit regression analysis. Based on our data analysis, we report that efficiency changes over time and that total productivity changes significantly, driven more by technology shifts than by efficiency change. We also observe that central bank efficiency is affected far more strongly by a country's export level than by its import level, average exchange rate, or GDP, implying that the export level significantly influences central bank performance.
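As an illustration of the envelopment side of this approach, the sketch below solves the input-oriented CCR DEA linear program with SciPy. The three-input, two-output decision matrix is a made-up placeholder, not the paper's data, and the `super_eff` flag shows how the super-efficiency variant drops the evaluated bank from its own reference set.

```python
# A minimal sketch of input-oriented CCR DEA solved as a linear program.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0, super_eff=False):
    """Input-oriented CCR efficiency of unit j0.
    X: (m inputs, n units), Y: (s outputs, n units).
    super_eff=True drops j0 from the reference set, allowing scores > 1
    that can be used to rank the efficient units."""
    m, n = X.shape
    s = Y.shape[0]
    keep = [j for j in range(n) if not (super_eff and j == j0)]
    k = len(keep)
    c = np.r_[1.0, np.zeros(k)]                         # minimize theta
    A_ub = np.r_[np.c_[-X[:, [j0]], X[:, keep]],        # sum_j lam_j x_ij <= theta * x_i,j0
                 np.c_[np.zeros((s, 1)), -Y[:, keep]]]  # sum_j lam_j y_rj >= y_r,j0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (k + 1))
    return res.x[0]

# hypothetical data: 3 inputs x 3 banks, 2 outputs x 3 banks
X = np.array([[4., 2., 3.], [1., 2., 1.], [5., 3., 4.]])
Y = np.array([[2., 3., 2.], [1., 1., 2.]])
print([round(dea_efficiency(X, Y, j), 3) for j in range(3)])
```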
{"title":"The Impact of International Trade on Central Bank Efficiency: An Application of DEA and Tobit Regression Analysis","authors":"Q. Dar, Young-Hyo Ahn, Gulbadin Farooq Dar","doi":"10.19139/SOIC-2310-5070-1077","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-1077","url":null,"abstract":"The purpose of this study is to introduces a novel methodology to measure the central bank efficiency. The data envelopment analysis (DEA) applies in the combination of three input and two output variables characterizing the economic balance in international trade. Super-efficiency DEA model is applied for ranking & comparing the efficiency of different central banks. In contrast, the Malmquist productivity index (MPI) is used to measure the productivity change over the period of time. Further, the study is extended to quantify the impact of international trade dimension on the efficiency of the central bank by using Tobit regression analysis. Finally, based on our data analysis, we reported that the efficiency changes over the period of time and the total productivity changes significantly due to the technology shift as compared to efficiency change. Additionally, it is also observed that the central bank efficiency is impacted dramatically by the export level of the country as compared to import level, average exchange rate and GDP. It implies that the export level of the country significantly influences the performances of the central bank.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"223-240"},"PeriodicalIF":0.0,"publicationDate":"2021-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43253340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-22 | DOI: 10.19139/SOIC-2310-5070-861 | pp. 176-188
System Maintenance Using Several Imperfect Repairs Before a Perfect Repair
H. Smithson, J. Sarkar
Allowing several imperfect repairs before a perfect repair can lead to a highly reliable and efficient system by reducing repair time and repair cost. Assuming exponential lifetime and exponential repair time, we determine the optimal probability p of choosing a perfect repair over an imperfect repair after each failure. Based on either the limiting availability or the limiting average repair cost per unit time, we determine the optimal number of imperfect repairs before conducting a perfect repair.
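The trade-off behind the optimal number of imperfect repairs can be illustrated with a toy renewal-reward calculation. The degradation rule below (each imperfect repair shrinks the next mean lifetime by a factor rho) is an assumption made purely so that a finite optimum exists; it is not the paper's model.

```python
def limiting_availability(k, mean_life=100.0, rho=0.8,
                          mean_imperfect=1.0, mean_perfect=10.0):
    """Limiting availability of a renewal cycle of k imperfect repairs
    followed by one perfect repair, under the illustrative assumption that
    each imperfect repair shrinks the next mean lifetime by a factor rho."""
    uptime = sum(mean_life * rho**i for i in range(k + 1))  # k+1 lifetimes per cycle
    downtime = k * mean_imperfect + mean_perfect            # repair times per cycle
    return uptime / (uptime + downtime)

best_k = max(range(50), key=limiting_availability)
print(best_k, round(limiting_availability(best_k), 4))
```

With these made-up means, cheap imperfect repairs initially raise availability, but the geometric lifetime decay eventually makes a perfect repair worthwhile, so the maximizer is finite.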
{"title":"System Maintenance Using Several Imperfect Repairs Before a Perfect Repair","authors":"H. Smithson, J. Sarkar","doi":"10.19139/SOIC-2310-5070-861","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-861","url":null,"abstract":"Allowing several imperfect repairs before a perfect repair can lead to a highly reliable and efficient system by reducing repair time and repair cost. Assuming exponential lifetime and exponential repair time, we determine the optimal probability p of choosing a perfect repair over an imperfect repair after each failure. Based on either the limiting availability or the limiting average repair cost per unit time, we determine the optimal number of imperfect repairs before conducting a perfect repair.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"176-188"},"PeriodicalIF":0.0,"publicationDate":"2021-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41787989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-15 | DOI: 10.19139/SOIC-2310-5070-919 | pp. 157-175
Testing the Number of Components in a Birnbaum-Saunders Mixture Model under a Random Censoring Scheme
Walaa A. El-Sharkawy, M. Ismail
This paper deals with testing the number of components in a Birnbaum-Saunders mixture model under randomly right-censored data. We focus on two methods, one based on the modified likelihood ratio test and the other on a shortcut bootstrap test. Through extensive Monte Carlo simulation studies, we evaluate and compare the performance of the proposed tests in terms of their size and power. Moreover, a power analysis is provided as guidance for researchers examining the factors that affect the power of the proposed tests in detecting the correct number of components in a Birnbaum-Saunders mixture model. Finally, an example involving aircraft windshield data illustrates the testing procedure.
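A minimal skeleton of the parametric-bootstrap (shortcut bootstrap) likelihood-ratio test is sketched below. For brevity, a two-component exponential mixture stands in for the Birnbaum-Saunders mixture and censoring is ignored; only the resampling mechanics of the test are illustrated.

```python
# Parametric-bootstrap LRT for 1 vs. 2 mixture components (toy stand-in).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

def loglik1(x):                                   # one-component MLE: scale = mean
    return stats.expon(scale=x.mean()).logpdf(x).sum()

def loglik2(x):                                   # two-component mixture, fitted numerically
    def nll(p):
        w, s1, s2 = 1/(1 + np.exp(-p[0])), np.exp(p[1]), np.exp(p[2])
        return -np.log(w*stats.expon(scale=s1).pdf(x)
                       + (1 - w)*stats.expon(scale=s2).pdf(x) + 1e-300).sum()
    res = optimize.minimize(nll, [0.0, np.log(x.mean()) - 0.5, np.log(x.mean()) + 0.5],
                            method="Nelder-Mead")
    return -res.fun

def lrt(x):
    return 2*(loglik2(x) - loglik1(x))

x = rng.exponential(2.0, 200)                     # observed sample (H0 true here)
t_obs = lrt(x)
boot = [lrt(rng.exponential(x.mean(), x.size)) for _ in range(99)]  # resample under H0
p_value = (1 + sum(t >= t_obs for t in boot)) / (1 + len(boot))
print(round(t_obs, 3), round(p_value, 3))
```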
{"title":"Testing the Number of Components in a Birnbaum-Saunders Mixture Model under a Random Censoring Scheme","authors":"Walaa A. El-Sharkawy, M. Ismail","doi":"10.19139/SOIC-2310-5070-919","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-919","url":null,"abstract":"This paper deals with testing the number of components in a Birnbaum-Saunders mixture model under randomly right censored data. We focus on two methods, one based on the modified likelihood ratio test and the other based on the shortcut of bootstrap test. Based on extensive Monte Carlo simulation studies, we evaluate and compare the performance of the proposed tests through their size and power. Moreover, a power analysis is provided as a guidance for researchers to examine the factors that affect the power of the proposed tests used in detecting the correct number of components in a Birnbaum-Saunders mixture model. Finally an example of aircraft Windshield data is used to illustrate the testing procedure.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"157-175"},"PeriodicalIF":0.0,"publicationDate":"2021-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45930413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-09 | DOI: 10.19139/SOIC-2310-5070-1000 | pp. 123-136
A New Generalized Cauchy Distribution with an Application to Annual One Day Maximum Rainfall Data
Cory Ball, Binod Rimal, Sher B. Chhetri
In this article, we introduce a new three-parameter transmuted Cauchy distribution using the quadratic rank transmutation map approach. Some mathematical properties of the proposed model are discussed. A simulation study is conducted using the method of maximum likelihood estimation to estimate the model parameters. We use two real data sets and compare various statistics to demonstrate the fit and versatility of the proposed model.
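The quadratic rank transmutation map (QRTM) itself is standard: F_T(x) = (1 + λ)F(x) − λF(x)², with |λ| ≤ 1. The sketch below applies it to the Cauchy distribution; the parameter values are arbitrary illustrations, and the location/scale/λ parameterization is an assumption about how the paper's three parameters are arranged.

```python
# QRTM applied to the Cauchy distribution: pdf and inverse-transform sampling.
import numpy as np
from scipy import stats

def transmuted_cauchy_pdf(x, lam, loc=0.0, scale=1.0):
    F = stats.cauchy.cdf(x, loc, scale)
    f = stats.cauchy.pdf(x, loc, scale)
    return f * ((1 + lam) - 2 * lam * F)          # f_T = f * [(1+lam) - 2*lam*F]

def transmuted_cauchy_rvs(n, lam, loc=0.0, scale=1.0, rng=None):
    u = (rng or np.random.default_rng()).uniform(size=n)
    # invert the QRTM: solve lam*q**2 - (1+lam)*q + u = 0 for q = F(x)
    q = u if lam == 0 else ((1 + lam) - np.sqrt((1 + lam)**2 - 4*lam*u)) / (2*lam)
    return stats.cauchy.ppf(q, loc, scale)

x = transmuted_cauchy_rvs(5, lam=0.5, rng=np.random.default_rng(1))
print(x, transmuted_cauchy_pdf(x, lam=0.5))
```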
{"title":"A New Generalized Cauchy Distribution with an Application to Annual One Day Maximum Rainfall Data","authors":"Cory Ball, Binod Rimal, Sher B. Chhetri","doi":"10.19139/SOIC-2310-5070-1000","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-1000","url":null,"abstract":"In this article, we introduce a new three-parameter transmuted Cauchy distribution using the quadratic rank transmutation map approach. Some mathematical properties of the proposed model are discussed. A simulation study is conducted using the method of maximum likelihood estimation to estimate the parameters of the model. We use two real data sets and compare various statistics to show the fitting and versatility of the proposed model.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"123-136"},"PeriodicalIF":0.0,"publicationDate":"2021-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47699955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01 | DOI: 10.19139/soic-2310-5070-1175 | pp. 250-267
A Full Nesterov-Todd Step Infeasible Interior-point Method for Symmetric Optimization in the Wider Neighborhood of the Central Path
G. Lesaja, G. Q. Wang, A. Oganian
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8205320/pdf/nihms-1695846.pdf
In this paper, an improved Interior-Point Method (IPM) for solving symmetric optimization problems is presented. Symmetric optimization (SO) problems are linear optimization problems over symmetric cones. In particular, the method can be efficiently applied to an important instance of SO, the Controlled Tabular Adjustment (CTA) problem, a method used for Statistical Disclosure Limitation (SDL) of tabular data. The presented method is a full Nesterov-Todd step infeasible IPM for SO. The algorithm converges to an ε-approximate solution from any starting point, whether feasible or infeasible. Each iteration consists of a feasibility step and several centering steps; however, the iterates are obtained in a wider neighborhood of the central path than in similar algorithms of this type, which is the main improvement of the method. Nevertheless, the best iteration bound currently known for infeasible short-step methods is still achieved.
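For readers unfamiliar with the setting, the generic SO central-path equations and a wide neighborhood are recalled below in standard textbook form; the notation (Jordan product ∘, identity element e) is generic background and may differ from the paper's exact definitions.

```latex
% Generic symmetric-optimization problem over a symmetric cone K:
\min\ \langle c, x \rangle \quad \text{s.t.} \quad Ax = b,\ x \in \mathcal{K}

% Perturbed optimality conditions tracing the central path for \mu > 0:
Ax = b, \qquad A^{*}y + s = c, \qquad x \circ s = \mu e

% A wide neighborhood of the central path (larger \tau = wider neighborhood):
\mathcal{N}(\tau) = \Bigl\{ (x,y,s) :\ x, s \in \operatorname{int}\mathcal{K},\
\Bigl\| \tfrac{x \circ s}{\mu} - e \Bigr\|_F \le \tau \Bigr\}
```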
{"title":"A Full Nesterov-Todd Step Infeasible Interior-point Method for Symmetric Optimization in the Wider Neighborhood of the Central Path.","authors":"G Lesaja, G Q Wang, A Oganian","doi":"10.19139/soic-2310-5070-1175","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1175","url":null,"abstract":"<p><p>In this paper, an improved Interior-Point Method (IPM) for solving symmetric optimization problems is presented. Symmetric optimization (SO) problems are linear optimization problems over symmetric cones. In particular, the method can be efficiently applied to an important instance of SO, a Controlled Tabular Adjustment (CTA) problem which is a method used for Statistical Disclosure Limitation (SDL) of tabular data. The presented method is a full Nesterov-Todd step infeasible IPM for SO. The algorithm converges to <i>ε</i>-approximate solution from any starting point whether feasible or infeasible. Each iteration consists of the feasibility step and several centering steps, however, the iterates are obtained in the wider neighborhood of the central path in comparison to the similar algorithms of this type which is the main improvement of the method. However, the currently best known iteration bound known for infeasible short-step methods is still achieved.</p>","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 2","pages":"250-267"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8205320/pdf/nihms-1695846.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39243357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-24 | DOI: 10.19139/SOIC-2310-5070-750
Applications of Some Rating Methods to Solve Multicriteria Decision-Making Problems
Joseph Gogodze
This study proposes a new approach to solving multicriteria decision-making problems based on rating/ranking methods. In particular, we investigate the applicability of the Massey, Colley, Keener, offence-defence, and authority-hub rating methods, which have been used successfully in various fields. The proposed approach is useful when no decision-making authority is available or when the relative importance of the various criteria has not been evaluated in advance. The approach is tested on an example problem to demonstrate its viability and suitability for application.
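As a sketch of how a sports-rating method can rank alternatives, the code below adapts the Colley method: every pair of alternatives plays one "game" per criterion, won by the alternative with the better score. The decision matrix is made up, and treating ties as unplayed decisions is an illustrative choice, not necessarily the paper's.

```python
# Colley ratings applied to a hypothetical multicriteria decision matrix.
import numpy as np

scores = np.array([[7., 3., 9.],     # alternatives x criteria (higher is better)
                   [5., 8., 6.],
                   [6., 6., 4.]])
n, m = scores.shape
wins = np.zeros(n); losses = np.zeros(n); games = np.zeros((n, n))
for k in range(m):                               # one "game" per pair per criterion
    for i in range(n):
        for j in range(i + 1, n):
            games[i, j] += 1; games[j, i] += 1
            if scores[i, k] > scores[j, k]:
                wins[i] += 1; losses[j] += 1
            elif scores[j, k] > scores[i, k]:
                wins[j] += 1; losses[i] += 1

C = 2 * np.eye(n) + np.diag(games.sum(axis=1)) - games   # Colley matrix
b = 1 + (wins - losses) / 2
ratings = np.linalg.solve(C, b)
print(np.argsort(-ratings), ratings.round(3))            # ranking of alternatives
```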
{"title":"Applications of Some Rating Methods to Solve Multicriteria Decision-Making Problems","authors":"Joseph Gogodze","doi":"10.19139/SOIC-2310-5070-750","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-750","url":null,"abstract":"This study proposes a new approach for the solution of multicriteria decision-making problems. The proposed approach is based on using rating/ranking methods. Particularly, in this paper, we investigate the possibility of applying Massey, Colley, Keener, offence-defence, and authority-hub rating methods, which are successfully used in various fields. The proposed approach is useful when no decision-making authority is available or when the relative importance of various criteria has not been previously evaluated. The proposed approach is tested with an example problem to demonstrate its viability and suitability for application.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"62 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86099291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-24 | DOI: 10.19139/SOIC-2310-5070-1025
Modelling Crude Oil Returns Using the NRIG Distribution
Knowledge Chinhamu, Nompilo Mabaso, R. Chifurira
Over the past decade, crude oil prices have risen dramatically, making the oil market very volatile and risky; hence, implementing an efficient risk management tool against market risk is crucial. Value-at-risk (VaR) has become the most common tool in this context to quantify market risk. Financial data typically exhibit features such as volatility clustering, asymmetry, and heavy and semi-heavy tails, making it hard, if not impossible, to model them with a normal distribution. In this paper, we propose subclasses of the generalised hyperbolic distributions (GHDs) as appropriate models for capturing these characteristics in crude oil and gasoline returns. We also introduce a new subclass of GHDs, the normal reciprocal inverse Gaussian (NRIG) distribution, for evaluating the VaR of the crude oil and gasoline market. Furthermore, VaR estimation and backtesting procedures using the Kupiec likelihood ratio test are conducted to test the extreme tails of these models. The Kupiec test statistics indicate which GHD model should be chosen at various VaR levels. The final results thus allow risk managers, financial analysts, and energy market academics to choose a robust risk quantification model for crude oil and gasoline returns at their specific VaR levels of interest. For NRIG in particular, the results suggest that better VaR estimates are obtained for long positions.
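The backtesting step can be illustrated with the Kupiec unconditional-coverage test. In the sketch below, simulated Student-t returns and a fitted t distribution stand in for the crude-oil data and the GHD/NRIG fits (NRIG is not available in SciPy), so only the exception counting and likelihood-ratio mechanics are shown.

```python
# Kupiec unconditional-coverage backtest on a toy VaR model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
returns = stats.t.rvs(df=4, scale=0.01, size=1000, random_state=rng)

alpha = 0.01                                        # 1% VaR, long position
df_t, loc, scale = stats.t.fit(returns)
var = stats.t.ppf(alpha, df_t, loc, scale)          # left-tail quantile
exceptions = int((returns < var).sum())             # losses beyond VaR

def kupiec_lr(n, x, alpha):
    """LR statistic of H0: true exception rate == alpha; ~ chi2(1) under H0."""
    phat = x / n
    ll0 = x*np.log(alpha) + (n - x)*np.log(1 - alpha)
    ll1 = x*np.log(phat) + (n - x)*np.log(1 - phat) if 0 < x < n else 0.0
    return -2*(ll0 - ll1)

lr = kupiec_lr(len(returns), exceptions, alpha)
print(exceptions, round(lr, 3), round(1 - stats.chi2.cdf(lr, df=1), 3))
```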
{"title":"Modelling Crude Oil Returns Using the NRIG Distribution","authors":"Knowledge Chinhamu, Nompilo Mabaso, R. Chifurira","doi":"10.19139/SOIC-2310-5070-1025","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-1025","url":null,"abstract":"Over the past decade, crude oil prices have risen dramatically, making the oil market very volatile and risky; hence, implementing an efficient risk management tool against market risk is crucial. Value-at-risk (VaR) has become the most common tool in this context to quantify market risk. Financial data typically have certain features such as volatility clustering, asymmetry, and heavy and semi-heavy tails, making it hard, if not impossible, to model them by using a normal distribution. In this paper, we propose the subclasses of the generalised hyperbolic distributions (GHDs), as appropriate models for capturing these characteristics for the crude oil and gasoline returns. We also introduce the new subclass of GHDs, namely normal reciprocal inverse Gaussian distribution (NRIG), in evaluating the VaR for the crude oil and gasoline market. Furthermore, VaR estimation and backtesting procedures using the Kupiec likelihood ratio test are conducted to test the extreme tails of these models. The main findings from the Kupiec likelihood test statistics suggest that the best GHD model should be chosen at various VaR levels. Thus, the final results of this research allow risk managers, financial analysts, and energy market academics to be flexible in choosing a robust risk quantification model for crude oil and gasoline returns at their specific VaR levels of interest. Particularly for NRIG, the results suggest that a better VaR estimation is provided at the long positions.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83567372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-08 | DOI: 10.19139/soic-2310-5070-1034
Study of Quantile-Based Cumulative Renyi Information Measure
Rekha, Vikas Kumar
In this paper, we propose a quantile version of the cumulative Renyi entropy for residual and past lifetimes and study its properties. We also study the quantile-based cumulative Renyi entropy of extreme order statistics when the underlying random variable is untruncated or truncated. Some characterization results are derived using the relationship between the proposed information measure and reliability measures. We also examine the measure in relation to applied problems such as weighted and equilibrium models.
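One common quantile-based form of the cumulative residual Renyi entropy is ξ_α = (1/(1−α)) log ∫₀¹ (1−u)^α q(u) du, where q(u) = Q′(u) is the quantile density; the paper's exact definition may differ. The sketch below evaluates this form numerically for an exponential distribution, where a closed form is available for checking.

```python
# Numeric check of a quantile-based cumulative residual Renyi entropy.
import numpy as np
from scipy import integrate

def quantile_cre(qdf, alpha):
    val, _ = integrate.quad(lambda u: (1 - u)**alpha * qdf(u), 0, 1)
    return np.log(val) / (1 - alpha)

lam = 2.0
qdf_exp = lambda u: 1.0 / (lam * (1 - u))      # quantile density of Exp(lam)
for a in (0.5, 2.0):
    closed = np.log(1 / (lam * a)) / (1 - a)   # closed form for the exponential
    print(a, round(quantile_cre(qdf_exp, a), 4), round(closed, 4))
```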
{"title":"Study of Quantile-Based Cumulative Renyi Information Measure","authors":"Rekha, Vikas Kumar","doi":"10.19139/soic-2310-5070-1034","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1034","url":null,"abstract":"In this paper, we proposed a quantile version of cumulative Renyi entropy for residual and past lifetimes and study their properties. We also study quantile-based cumulative Renyi entropy for extreme order statistic when random variable untruncated or truncated in nature. Some characterization results are studied using the relationship between proposed information measure and reliability measure. We also examine it in relation to some applied problems such as weighted and equillibrium models.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74996981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-07 | DOI: 10.19139/SOIC-2310-5070-1032 | pp. 99-122
Estimation and Testing Procedures for the Reliability Characteristics of Chen Distribution Based on Type II Censoring and the Sampling Scheme of Bartholomew
A. Chaturvedi, Surinder Kumar
In this paper, we consider the Chen distribution and derive UMVUEs and MLEs of the parameter λ, the hazard rate h(t), and two measures of reliability, namely R(t) = P(X > t), where X denotes the lifetime of an item, and P = P(X > Y), which represents the reliability of an item or system of random strength X subject to random stress Y, under a type II censoring scheme and the sampling scheme of Bartholomew. We also develop interval estimates of the reliability measures. Testing procedures for hypotheses related to different parametric functions are also developed. A comparative study of different methods of point estimation and average confidence length is conducted through simulation. The analysis of a real data set is presented for illustration.
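For the Chen distribution, with survival function R(t) = exp(λ(1 − e^(t^β))), the MLE of λ under type II censoring has a closed form when the shape parameter β is treated as known (the abstract estimates only λ). The sketch below simulates a censored sample and computes λ̂, R(t), and h(t); the parameter values are arbitrary, and fixing β is an assumption for illustration.

```python
# Closed-form MLE of lambda for the Chen distribution under type II censoring.
import numpy as np

rng = np.random.default_rng(3)
beta, lam_true, n, r = 0.7, 0.5, 50, 40

# simulate Chen lifetimes by inverting the survival function
u = rng.uniform(size=n)
t = (np.log(1 - np.log(u) / lam_true))**(1 / beta)
t_obs = np.sort(t)[:r]                       # first r order statistics (type II)

w = np.exp(t_obs**beta) - 1
lam_hat = r / (w.sum() + (n - r) * w[-1])    # closed-form MLE of lambda
R = lambda x: np.exp(lam_hat * (1 - np.exp(x**beta)))            # reliability
h = lambda x: lam_hat * beta * x**(beta - 1) * np.exp(x**beta)   # hazard rate
print(round(lam_hat, 4), round(R(1.0), 4), round(h(1.0), 4))
```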