Pub Date : 2021-02-06 DOI: 10.19139/SOIC-2310-5070-671
Héctor Zárate, Edilberto Cepeda
This article extends the fusion of several statistical methods to estimate the mean and variance functions in heteroscedastic semiparametric models when the response variable comes from a two-parameter exponential family distribution. We rely on the natural connection among smoothing methods that use basis functions with penalization, mixed models, and a Bayesian Markov chain Monte Carlo simulation methodology. The significance of our strategy lies in its potential to contribute to a simple and unified computational methodology that accounts for the factors affecting the variability of the responses, which in turn is important for efficient estimation and correct inference on the mean parameters without specifying a fully parametric model. An extensive simulation study investigates the performance of the estimates. Finally, an application to Light Detection and Ranging (LIDAR) data highlights the merits of our approach.
{"title":"Semiparametric Smoothing Spline in Joint Mean and Dispersion Models with Responses from the Biparametric Exponential Family: A Bayesian Perspective","authors":"Héctor Zárate, Edilberto Cepeda","doi":"10.19139/SOIC-2310-5070-671","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-671","url":null,"abstract":"This article extends the fusion among various statistical methods to estimate the mean and variance functions in heteroscedastic semiparametric models when the response variable comes from a two-parameter exponential family distribution. We rely on the natural connection among smoothing methods that use basis functions with penalization, mixed models and a Bayesian Markov Chain sampling simulation methodology. The significance and implications of our strategy lies in its potential to contribute to a simple and unified computational methodology that takes into account the factors that affect the variability in the responses, which in turn is important for an efficient estimation and correct inference of mean parameters without the specification of fully parametric models. An extensive simulation study investigates the performance of the estimates. Finally, an application using the Light Detection and Ranging technique, LIDAR, data highlights the merits of our approach.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"351-367"},"PeriodicalIF":0.0,"publicationDate":"2021-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45261141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
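The "basis functions with penalization" idea behind this abstract can be sketched with a penalized spline fit in closed form. This is a generic toy illustration, not the authors' Bayesian procedure: the truncated-line basis, knot count, and penalty weight below are our own choices, and the ridge penalty acting only on the knot coefficients is exactly the mixed-model representation the abstract alludes to.

```python
import numpy as np

def pspline_fit(x, y, n_knots=15, lam=1e-6):
    """Penalized-spline fit: truncated-line basis plus a ridge penalty.

    The penalty matrix D acts only on the knot (random-effect) coefficients,
    which is the mixed-model representation of a P-spline.
    """
    knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
    # Design matrix: [1, x, (x - k)_+ for each interior knot].
    B = np.column_stack([np.ones_like(x), x]
                        + [np.maximum(x - k, 0.0) for k in knots])
    # Penalize only the truncated-line coefficients, not intercept and slope.
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ coef

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)   # noiseless toy signal
fit = pspline_fit(x, y)
```

With a small penalty the fitted curve tracks the smooth target closely; increasing `lam` shrinks the knot coefficients toward a straight line, which is how the penalty controls roughness.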
Pub Date : 2021-01-30 DOI: 10.19139/SOIC-2310-5070-994
A. Belcaid, M. Douimi
In this paper, we focus on signal smoothing and step detection for piecewise constant signals. This problem is central to several applications, such as human activity analysis, speech and image analysis, and anomaly detection in genetics. We present a two-stage approach to approximate the well-known line process energy, which arises from the probabilistic representation of the signal and its segmentation. In the first stage, we minimize a total variation (TV) least-squares problem to detect the majority of the continuous edges. In the second stage, we apply a combinatorial algorithm to filter out the false jumps introduced by the TV solution. The performance of the proposed method was tested on several synthetic examples. Compared with recent step-preserving denoising algorithms, the proposed acceleration offers superior speed and competitive step-detection quality.
{"title":"Nonconvex Energy Minimization with Unsupervised Line Process Classifier for Efficient Piecewise Constant Signals Reconstruction","authors":"A. Belcaid, M. Douimi","doi":"10.19139/SOIC-2310-5070-994","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-994","url":null,"abstract":"In this paper, we focus on the problem of signal smoothing and step-detection for piecewise constant signals. This problem is central to several applications such as human activity analysis, speech or image analysis and anomaly detection in genetics. We present a two-stage approach to approximate the well-known line process energy which arises from the probabilistic representation of the signal and its segmentation. In the first stage, we minimize a total variation (TV) least square problem to detect the majority of the continuous edges. In the second stage, we apply a combinatorial algorithm to filter all false jumps introduced by the TV solution. The performances of the proposed method were tested on several synthetic examples. In comparison to recent step-preserving denoising algorithms, the acceleration presents a superior speed and competitive step-detection quality.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"435-452"},"PeriodicalIF":0.0,"publicationDate":"2021-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46590671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
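The two-stage smooth-then-filter idea can be sketched in a few lines. This is a toy stand-in, not the authors' algorithm: a running median replaces the TV least-squares stage (which needs a dedicated solver), and a simple threshold on successive differences plays the role of the combinatorial false-jump filter; the window size and threshold are our own hypothetical choices.

```python
import numpy as np

def detect_steps(signal, window=5, threshold=0.5):
    """Stage 1: running-median smoothing. Stage 2: keep only clear jumps."""
    n = len(signal)
    half = window // 2
    smooth = np.array([np.median(signal[max(0, i - half):min(n, i + half + 1)])
                       for i in range(n)])
    diffs = np.abs(np.diff(smooth))
    candidates = np.flatnonzero(diffs > threshold)
    # Merge runs of adjacent candidate indices into single step locations.
    runs = []
    for idx in candidates:
        if runs and idx - runs[-1][-1] <= 1:
            runs[-1].append(idx)
        else:
            runs.append([idx])
    return [int(run[len(run) // 2]) for run in runs]

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(30), 2 * np.ones(40), np.zeros(30)])
noisy = truth + 0.1 * rng.standard_normal(100)
steps = detect_steps(noisy)
```

On this synthetic signal the two jumps (after indices 29 and 69) are recovered, while the small-amplitude noise never crosses the threshold.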
Pub Date : 2021-01-22 DOI: 10.19139/SOIC-2310-5070-1077
Q. Dar, Young-Hyo Ahn, Gulbadin Farooq Dar
The purpose of this study is to introduce a novel methodology for measuring central bank efficiency. Data envelopment analysis (DEA) is applied to a combination of three input and two output variables characterizing the economic balance in international trade. A super-efficiency DEA model is used to rank and compare the efficiency of different central banks, while the Malmquist productivity index (MPI) measures productivity change over time. The study is then extended to quantify the impact of international trade dimensions on central bank efficiency using Tobit regression analysis. Based on our data analysis, we find that efficiency changes over time and that total productivity change is driven mainly by technology shifts rather than by efficiency change. We also observe that central bank efficiency is affected far more by a country's export level than by its import level, average exchange rate, or GDP, implying that the export level significantly influences the performance of the central bank.
{"title":"The Impact of International Trade on Central Bank Efficiency: An Application of DEA and Tobit Regression Analysis","authors":"Q. Dar, Young-Hyo Ahn, Gulbadin Farooq Dar","doi":"10.19139/SOIC-2310-5070-1077","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-1077","url":null,"abstract":"The purpose of this study is to introduces a novel methodology to measure the central bank efficiency. The data envelopment analysis (DEA) applies in the combination of three input and two output variables characterizing the economic balance in international trade. Super-efficiency DEA model is applied for ranking & comparing the efficiency of different central banks. In contrast, the Malmquist productivity index (MPI) is used to measure the productivity change over the period of time. Further, the study is extended to quantify the impact of international trade dimension on the efficiency of the central bank by using Tobit regression analysis. Finally, based on our data analysis, we reported that the efficiency changes over the period of time and the total productivity changes significantly due to the technology shift as compared to efficiency change. Additionally, it is also observed that the central bank efficiency is impacted dramatically by the export level of the country as compared to import level, average exchange rate and GDP. It implies that the export level of the country significantly influences the performances of the central bank.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"223-240"},"PeriodicalIF":0.0,"publicationDate":"2021-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43253340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
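The basic DEA building block behind this abstract is a small linear program per decision-making unit (DMU). The sketch below solves the standard input-oriented CCR model, not the super-efficiency variant the paper uses, and the three-bank, one-input, one-output data set is entirely hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`.

    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0,
    with decision variables [theta, lam_1, ..., lam_n].
    """
    m, n = X.shape   # m inputs, n DMUs
    s = Y.shape[0]   # s outputs
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input rows:  X lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # Output rows: -Y lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.hstack([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (1 + n))
    return res.fun

# Hypothetical data: 3 banks, 1 input, 1 output.
X = np.array([[1.0, 2.0, 4.0]])
Y = np.array([[1.0, 2.0, 2.0]])
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```

Banks 0 and 1 sit on the constant-returns frontier (efficiency 1), while bank 2 produces the same output as bank 1 from twice the input, so its score is 0.5.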
Pub Date : 2021-01-22 DOI: 10.19139/SOIC-2310-5070-861
H. Smithson, J. Sarkar
Allowing several imperfect repairs before a perfect repair can lead to a highly reliable and efficient system by reducing repair time and repair cost. Assuming exponential lifetime and exponential repair time, we determine the optimal probability p of choosing a perfect repair over an imperfect repair after each failure. Based on either the limiting availability or the limiting average repair cost per unit time, we determine the optimal number of imperfect repairs before conducting a perfect repair.
{"title":"System Maintenance Using Several Imperfect Repairs Before a Perfect Repair","authors":"H. Smithson, J. Sarkar","doi":"10.19139/SOIC-2310-5070-861","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-861","url":null,"abstract":"Allowing several imperfect repairs before a perfect repair can lead to a highly reliable and efficient system by reducing repair time and repair cost. Assuming exponential lifetime and exponential repair time, we determine the optimal probability p of choosing a perfect repair over an imperfect repair after each failure. Based on either the limiting availability or the limiting average repair cost per unit time, we determine the optimal number of imperfect repairs before conducting a perfect repair.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"176-188"},"PeriodicalIF":0.0,"publicationDate":"2021-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41787989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
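The limiting availability in this setting can be checked against a renewal-reward benchmark by simulation. The sketch below uses our own simplifying assumptions, not the paper's full model: lifetimes are Exp(lam) and, by memorylessness, are taken to be unaffected by the repair type; a repair is perfect with probability p (exponential repair time with mean `mu_perfect`) and imperfect otherwise (mean `mu_imperfect`).

```python
import random

def limiting_availability(lam, p, mu_perfect, mu_imperfect,
                          n_cycles=200_000, seed=1):
    """Monte Carlo estimate of limiting availability = uptime / total time."""
    rng = random.Random(seed)
    up = down = 0.0
    for _ in range(n_cycles):
        up += rng.expovariate(lam)                     # lifetime until failure
        mean_repair = mu_perfect if rng.random() < p else mu_imperfect
        down += rng.expovariate(1.0 / mean_repair)     # repair duration
    return up / (up + down)

# Renewal-reward benchmark: A = E[up] / (E[up] + E[down]).
lam, p, mu_p, mu_i = 1.0, 0.3, 0.5, 0.1
analytic = (1 / lam) / (1 / lam + p * mu_p + (1 - p) * mu_i)
simulated = limiting_availability(lam, p, mu_p, mu_i)
```

Varying p traces out the trade-off the abstract optimizes: cheaper, faster imperfect repairs raise availability under these assumptions, which is why an optimal mix of imperfect and perfect repairs exists once imperfect repairs carry a reliability cost.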
Pub Date : 2021-01-15 DOI: 10.19139/SOIC-2310-5070-919
Walaa A. El-Sharkawy, M. Ismail
This paper deals with testing the number of components in a Birnbaum-Saunders mixture model under randomly right-censored data. We focus on two methods, one based on the modified likelihood ratio test and the other on a shortcut bootstrap test. Based on extensive Monte Carlo simulation studies, we evaluate and compare the performance of the proposed tests through their size and power. Moreover, a power analysis is provided as guidance for researchers examining the factors that affect the power of the proposed tests in detecting the correct number of components in a Birnbaum-Saunders mixture model. Finally, an example using aircraft windshield data illustrates the testing procedure.
{"title":"Testing the Number of Components in a Birnbaum-Saunders Mixture Model under a Random Censoring Scheme","authors":"Walaa A. El-Sharkawy, M. Ismail","doi":"10.19139/SOIC-2310-5070-919","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-919","url":null,"abstract":"This paper deals with testing the number of components in a Birnbaum-Saunders mixture model under randomly right censored data. We focus on two methods, one based on the modified likelihood ratio test and the other based on the shortcut of bootstrap test. Based on extensive Monte Carlo simulation studies, we evaluate and compare the performance of the proposed tests through their size and power. Moreover, a power analysis is provided as a guidance for researchers to examine the factors that affect the power of the proposed tests used in detecting the correct number of components in a Birnbaum-Saunders mixture model. Finally an example of aircraft Windshield data is used to illustrate the testing procedure.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"157-175"},"PeriodicalIF":0.0,"publicationDate":"2021-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45930413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
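The parametric-bootstrap logic for testing one versus two components can be sketched generically. This is a toy stand-in for the paper's procedure: it uses exponential components instead of Birnbaum-Saunders, ignores censoring, and the crude EM fit, iteration counts, and bootstrap size are all our own choices.

```python
import math
import random

def loglik_exp(data, mean):
    """Log-likelihood of a single exponential with the given mean."""
    return sum(-math.log(mean) - x / mean for x in data)

def fit_mix2(data, iters=100):
    """Crude EM for a two-component exponential mixture; returns its log-likelihood."""
    m = sum(data) / len(data)
    pi, m1, m2 = 0.5, 0.5 * m, 1.5 * m
    for _ in range(iters):
        resp = []
        for x in data:
            f1 = pi * math.exp(-x / m1) / m1
            f2 = (1 - pi) * math.exp(-x / m2) / m2
            resp.append(f1 / (f1 + f2))          # E-step: responsibilities
        s1 = sum(resp)
        s2 = len(resp) - s1
        m1 = max(sum(r * x for r, x in zip(resp, data)) / max(s1, 1e-9), 1e-9)
        m2 = max(sum((1 - r) * x for r, x in zip(resp, data)) / max(s2, 1e-9), 1e-9)
        pi = min(max(s1 / len(resp), 1e-6), 1 - 1e-6)
    return sum(math.log(pi * math.exp(-x / m1) / m1
                        + (1 - pi) * math.exp(-x / m2) / m2) for x in data)

def bootstrap_pvalue(data, n_boot=30, seed=2):
    """Parametric bootstrap of the LRT statistic under H0 (one component)."""
    rng = random.Random(seed)
    m0 = sum(data) / len(data)
    stat = 2 * (fit_mix2(data) - loglik_exp(data, m0))
    null_stats = []
    for _ in range(n_boot):
        boot = [rng.expovariate(1 / m0) for _ in data]
        null_stats.append(2 * (fit_mix2(boot)
                               - loglik_exp(boot, sum(boot) / len(boot))))
    return sum(s >= stat for s in null_stats) / n_boot

random.seed(0)
sample = [random.expovariate(1.0) for _ in range(100)]
pval = bootstrap_pvalue(sample)
```

Refitting on data simulated under H0 sidesteps the non-standard null distribution of the LRT at the boundary of the parameter space, which is exactly why bootstrap calibration is attractive for mixture-order testing.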
Pub Date : 2021-01-09 DOI: 10.19139/SOIC-2310-5070-1000
Cory Ball, Binod Rimal, Sher B. Chhetri
In this article, we introduce a new three-parameter transmuted Cauchy distribution using the quadratic rank transmutation map approach. Some mathematical properties of the proposed model are discussed. A simulation study is conducted using the method of maximum likelihood estimation to estimate the parameters of the model. We use two real data sets and compare various statistics to show the fitting and versatility of the proposed model.
{"title":"A New Generalized Cauchy Distribution with an Application to Annual One Day Maximum Rainfall Data","authors":"Cory Ball, Binod Rimal, Sher B. Chhetri","doi":"10.19139/SOIC-2310-5070-1000","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-1000","url":null,"abstract":"In this article, we introduce a new three-parameter transmuted Cauchy distribution using the quadratic rank transmutation map approach. Some mathematical properties of the proposed model are discussed. A simulation study is conducted using the method of maximum likelihood estimation to estimate the parameters of the model. We use two real data sets and compare various statistics to show the fitting and versatility of the proposed model.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 1","pages":"123-136"},"PeriodicalIF":0.0,"publicationDate":"2021-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47699955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
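The quadratic rank transmutation map (QRTM) the abstract relies on is short enough to state in code: given a base CDF G, the transmuted CDF is F(x) = (1 + λ)G(x) − λG(x)², with |λ| ≤ 1. The sketch below applies it to the Cauchy CDF and inverts it for sampling; the parameter values in the usage lines are illustrative, not from the paper.

```python
import math

def transmuted_cauchy_cdf(x, loc=0.0, scale=1.0, lam=0.0):
    """QRTM applied to the Cauchy CDF: F(x) = (1+lam)*G(x) - lam*G(x)**2."""
    g = 0.5 + math.atan((x - loc) / scale) / math.pi
    return (1 + lam) * g - lam * g * g

def transmuted_cauchy_quantile(u, loc=0.0, scale=1.0, lam=0.0):
    """Inverse CDF: solve (1+lam)*g - lam*g**2 = u for g in [0, 1],
    then invert the Cauchy CDF."""
    if abs(lam) < 1e-12:
        g = u
    else:
        g = ((1 + lam) - math.sqrt((1 + lam) ** 2 - 4 * lam * u)) / (2 * lam)
    return loc + scale * math.tan(math.pi * (g - 0.5))

# Round trip at an illustrative lambda:
xs = [transmuted_cauchy_quantile(u, lam=0.5) for u in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Feeding uniform random numbers through `transmuted_cauchy_quantile` gives inverse-CDF sampling, which is one standard way to run the kind of simulation study the abstract describes.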
Pub Date : 2021-01-01 DOI: 10.19139/soic-2310-5070-1175
G Lesaja, G Q Wang, A Oganian
In this paper, an improved Interior-Point Method (IPM) for solving symmetric optimization problems is presented. Symmetric optimization (SO) problems are linear optimization problems over symmetric cones. In particular, the method can be efficiently applied to an important instance of SO, the Controlled Tabular Adjustment (CTA) problem, a method used for Statistical Disclosure Limitation (SDL) of tabular data. The presented method is a full Nesterov-Todd step infeasible IPM for SO. The algorithm converges to an ε-approximate solution from any starting point, whether feasible or infeasible. Each iteration consists of a feasibility step and several centering steps; however, the iterates are obtained in a wider neighborhood of the central path than in similar algorithms of this type, which is the main improvement of the method. Nevertheless, the best iteration bound currently known for infeasible short-step methods is still achieved.
{"title":"A Full Nesterov-Todd Step Infeasible Interior-point Method for Symmetric Optimization in the Wider Neighborhood of the Central Path.","authors":"G Lesaja, G Q Wang, A Oganian","doi":"10.19139/soic-2310-5070-1175","DOIUrl":"10.19139/soic-2310-5070-1175","url":null,"abstract":"<p><p>In this paper, an improved Interior-Point Method (IPM) for solving symmetric optimization problems is presented. Symmetric optimization (SO) problems are linear optimization problems over symmetric cones. In particular, the method can be efficiently applied to an important instance of SO, a Controlled Tabular Adjustment (CTA) problem which is a method used for Statistical Disclosure Limitation (SDL) of tabular data. The presented method is a full Nesterov-Todd step infeasible IPM for SO. The algorithm converges to <i>ε</i>-approximate solution from any starting point whether feasible or infeasible. Each iteration consists of the feasibility step and several centering steps, however, the iterates are obtained in the wider neighborhood of the central path in comparison to the similar algorithms of this type which is the main improvement of the method. However, the currently best known iteration bound known for infeasible short-step methods is still achieved.</p>","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"9 2","pages":"250-267"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8205320/pdf/nihms-1695846.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39243357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
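The central-path idea underlying all such IPMs can be illustrated on a toy problem. The sketch below is emphatically not the paper's method (no Nesterov-Todd steps, no symmetric cones, no infeasible starts): it follows the log-barrier central path of a two-variable LP whose equality constraint has been eliminated, shrinking the barrier parameter μ and taking a few damped Newton steps at each value.

```python
def central_path_solve(mu0=1.0, shrink=0.5, tol=1e-8):
    """Follow the central path of the toy LP
        min 2*x1 + x2   s.t.  x1 + x2 = 1,  x1, x2 >= 0,
    whose optimum is (0, 1) with value 1. Eliminating x2 = 1 - t leaves
    the barrier subproblem  min_t  t + 1 - mu*(log t + log(1 - t)).
    """
    t, mu = 0.5, mu0
    while mu > tol:
        for _ in range(20):          # damped Newton on the subproblem
            grad = 1.0 - mu / t + mu / (1.0 - t)
            hess = mu * (1.0 / t ** 2 + 1.0 / (1.0 - t) ** 2)
            step = grad / hess
            while not (0.0 < t - step < 1.0):  # stay strictly inside (0, 1)
                step *= 0.5
            t -= step
        mu *= shrink                 # move to the next central-path point
    return t, 1.0 - t

x1, x2 = central_path_solve()
```

Each reduction of μ moves the barrier minimizer closer to the LP optimum; the paper's contribution concerns how wide a neighborhood of this path the iterates may wander in while keeping the best known iteration bound.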
Pub Date : 2020-12-24 DOI: 10.19139/SOIC-2310-5070-750
Joseph Gogodze
This study proposes a new approach for the solution of multicriteria decision-making problems. The proposed approach is based on rating/ranking methods. In particular, we investigate the applicability of the Massey, Colley, Keener, offence-defence, and authority-hub rating methods, which are successfully used in various fields. The proposed approach is useful when no decision-making authority is available or when the relative importance of the various criteria has not been previously evaluated. The approach is tested on an example problem to demonstrate its viability and suitability for application.
{"title":"Applications of Some Rating Methods to Solve Multicriteria Decision-Making Problems","authors":"Joseph Gogodze","doi":"10.19139/SOIC-2310-5070-750","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-750","url":null,"abstract":"This study proposes a new approach for the solution of multicriteria decision-making problems. The proposed approach is based on using rating/ranking methods. Particularly, in this paper, we investigate the possibility of applying Massey, Colley, Keener, offence-defence, and authority-hub rating methods, which are successfully used in various fields. The proposed approach is useful when no decision-making authority is available or when the relative importance of various criteria has not been previously evaluated. The proposed approach is tested with an example problem to demonstrate its viability and suitability for application.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"62 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86099291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
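Of the rating methods named in the abstract, Colley's is the most compact: ratings solve the linear system C r = b, where C has 2 plus the number of games on the diagonal, minus the pairwise game counts off the diagonal, and b_i = 1 + (wins_i − losses_i)/2. The round-robin data below is a hypothetical example of ours.

```python
import numpy as np

def colley_ratings(n_teams, games):
    """Colley's method: solve C r = b for the ratings r.

    games is a list of (winner, loser) pairs.
    """
    C = 2.0 * np.eye(n_teams)
    b = np.ones(n_teams)
    for winner, loser in games:
        C[winner, winner] += 1
        C[loser, loser] += 1
        C[winner, loser] -= 1
        C[loser, winner] -= 1
        b[winner] += 0.5
        b[loser] -= 0.5
    return np.linalg.solve(C, b)

# Hypothetical round robin: team 0 beats 1 and 2; team 1 beats 2.
ratings = colley_ratings(3, [(0, 1), (0, 2), (1, 2)])
# ratings -> [0.7, 0.5, 0.3]
```

In the multicriteria setting, "teams" become alternatives and "games" become pairwise comparisons on the criteria, which is what lets these sports-rating systems double as decision-making tools.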
Pub Date : 2020-12-24 DOI: 10.19139/SOIC-2310-5070-1025
Knowledge Chinhamu, Nompilo Mabaso, R. Chifurira
Over the past decade, crude oil prices have risen dramatically, making the oil market very volatile and risky; hence, implementing an efficient risk management tool against market risk is crucial. Value-at-risk (VaR) has become the most common tool for quantifying market risk in this context. Financial data typically exhibit features such as volatility clustering, asymmetry, and heavy or semi-heavy tails, making it hard, if not impossible, to model them with a normal distribution. In this paper, we propose subclasses of the generalised hyperbolic distributions (GHDs) as appropriate models for capturing these characteristics of crude oil and gasoline returns. We also introduce a subclass of the GHDs, the normal reciprocal inverse Gaussian (NRIG) distribution, for evaluating VaR in the crude oil and gasoline market. Furthermore, VaR estimation and backtesting with the Kupiec likelihood ratio test are conducted to assess the extreme tails of these models. The Kupiec test statistics suggest that the best GHD model should be chosen separately at each VaR level. The results thus allow risk managers, financial analysts, and energy market academics the flexibility to choose a robust risk quantification model for crude oil and gasoline returns at their specific VaR levels of interest; for the NRIG in particular, the results suggest better VaR estimates for long positions.
{"title":"Modelling Crude Oil Returns Using the NRIG Distribution","authors":"Knowledge Chinhamu, Nompilo Mabaso, R. Chifurira","doi":"10.19139/SOIC-2310-5070-1025","DOIUrl":"https://doi.org/10.19139/SOIC-2310-5070-1025","url":null,"abstract":"Over the past decade, crude oil prices have risen dramatically, making the oil market very volatile and risky; hence, implementing an efficient risk management tool against market risk is crucial. Value-at-risk (VaR) has become the most common tool in this context to quantify market risk. Financial data typically have certain features such as volatility clustering, asymmetry, and heavy and semi-heavy tails, making it hard, if not impossible, to model them by using a normal distribution. In this paper, we propose the subclasses of the generalised hyperbolic distributions (GHDs), as appropriate models for capturing these characteristics for the crude oil and gasoline returns. We also introduce the new subclass of GHDs, namely normal reciprocal inverse Gaussian distribution (NRIG), in evaluating the VaR for the crude oil and gasoline market. Furthermore, VaR estimation and backtesting procedures using the Kupiec likelihood ratio test are conducted to test the extreme tails of these models. The main findings from the Kupiec likelihood test statistics suggest that the best GHD model should be chosen at various VaR levels. Thus, the final results of this research allow risk managers, financial analysts, and energy market academics to be flexible in choosing a robust risk quantification model for crude oil and gasoline returns at their specific VaR levels of interest. Particularly for NRIG, the results suggest that a better VaR estimation is provided at the long positions.","PeriodicalId":93376,"journal":{"name":"Statistics, optimization & information computing","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83567372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
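The Kupiec unconditional-coverage backtest used in this abstract is simple to state: with x VaR violations in n days at level p, the likelihood-ratio statistic is asymptotically chi-square with one degree of freedom under correct coverage. The sketch below pairs it with a plain historical VaR estimator; the toy return series is ours, and the chi-square(1) survival function is computed exactly via erfc.

```python
import math

def historical_var(returns, level=0.05):
    """One-day historical VaR at the given tail level (reported as a positive loss)."""
    ordered = sorted(returns)
    k = max(int(level * len(ordered)) - 1, 0)
    return -ordered[k]

def kupiec_pvalue(n, x, p):
    """Kupiec LR test of unconditional coverage: x violations in n days at level p."""
    phat = min(max(x / n, 1e-12), 1 - 1e-12)
    ll0 = (n - x) * math.log(1 - p) + x * math.log(p)
    ll1 = (n - x) * math.log(1 - phat) + x * math.log(phat)
    lr = -2.0 * (ll0 - ll1)
    # Chi-square(1) survival function: P(X > lr) = erfc(sqrt(lr / 2)).
    return math.erfc(math.sqrt(lr / 2.0))

returns = [i / 100 for i in range(-10, 10)]   # toy return series
var95 = historical_var(returns, 0.05)
pval_ok = kupiec_pvalue(1000, 50, 0.05)       # exactly the expected violation count
pval_bad = kupiec_pvalue(1000, 150, 0.05)     # far too many violations
```

A model whose violation count matches n·p is not rejected (p-value near 1), while a model violated three times too often is rejected decisively, which is how the abstract's model comparison across VaR levels operates.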