Solving a Typical Small Sample Size MRSM Dataset Problem Using a Flexible Hybrid Ensemble Approach for Credibility
Pub Date: 2024-01-05 | DOI: 10.19139/soic-2310-5070-1111
D. Chikobvu, Domingo Pavolo
Multiresponse surface methodology often involves small-sample data analytics which, statistically, suffer from regression modelling credibility problems. This is worsened by dataset, model selection and solution methodology uncertainties. Solution methodologies that select and use a single best model per response at simultaneous optimisation struggle to deal with these problems effectively. This paper exploits the fact that model selection criteria choose differently, within a flexible hybrid ensemble system, to generate several solutions for integration and comparison. Mean square prediction error, with bias-variance-covariance decomposition values, was computed and analysed at simultaneous optimisation. The results suggest that the credibility of the final solution is enhanced by working with multiple models, solution methodologies and results. However, the results show no significant benefit from small-sample-size correction of the model selection criteria, and the analysis of bias-variance-covariance decompositions at simultaneous optimisation does not encourage dependence on theoretical optimality for best results.
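A minimal sketch of the point that different model selection criteria can disagree on a small dataset, which is the behaviour the ensemble approach exploits. The candidate models, data and small-sample AICc correction below are illustrative assumptions, not the paper's actual MRSM responses or candidate set.

```python
# Compare AIC, BIC and AICc across candidate polynomial models on a small sample.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 15                                              # deliberately small sample
x = rng.uniform(-1, 1, size=n)
y = 1 + 2 * x + 0.5 * x ** 2 + rng.normal(scale=0.3, size=n)

candidates = {
    "linear":    np.column_stack([np.ones(n), x]),
    "quadratic": np.column_stack([np.ones(n), x, x ** 2]),
    "cubic":     np.column_stack([np.ones(n), x, x ** 2, x ** 3]),
}
for name, X in candidates.items():
    fit = sm.OLS(y, X).fit()
    k = X.shape[1]
    aicc = fit.aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
    print(f"{name:10s} AIC={fit.aic:.2f} BIC={fit.bic:.2f} AICc={aicc:.2f}")
```

On small samples the criteria frequently rank the candidates differently, which motivates keeping several selected models rather than a single "best" one per response.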
{"title":"Solving a Typical Small Sample Size MRSM Dataset Problem Using a Flexible Hybrid Ensemble Approach for Credibility","authors":"D. Chikobvu, Domingo Pavolo","doi":"10.19139/soic-2310-5070-1111","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1111","url":null,"abstract":"Multiresponse surface methodology often involves small data analytics which, statistically, have regression modelling credibility problems. This is worsened by dataset, model selection and solution methodology uncertainties. It is difficult for solution methodologies which select and use single best models per response at simultaneous optimisation to effectively deal with these problems. This paper exploited the fact that model selection criteria choose differently, in a flexible hybrid ensemble system, to generate several solutions for integration and comparison. Mean square prediction error, with bias-variance-covariance decomposition values, was computed and analysed at simultaneous optimisation. Results suggest that the credibility of the final solution is enhanced when working with multiple models, solution methodologies and results. However, the results do not show any significance of small sample size correction to model selection criteria and analysis of bias-variance-covariance decompositions at simultaneous optimisation does not encourage dependence on theoretical optimality for best results.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"4 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140513741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unemployment Rates in Vocational Education in Indonesia Using Economic and Statistical Analysis
Pub Date: 2024-01-05 | DOI: 10.19139/soic-2310-5070-1887
Suryadi, M. Romadona, Sigit Setiawan, Fachrizal, Andi Budiansyah, Syahrizal Maulana, Rahmi Lestari Helmi, Silmi Tsurayya, RY Kun Haribowo, Yuni Andari, Bagaskara, Ratna Sri Harjanti
A linear regression model is used in this research to study the influence of the independent variables on the dependent variable. The dependent variable Y is the unemployment rate in vocational education, while the independent variables are X1 (Job Opportunities), X2 (Policy) and X3 (Area). Model parameters are estimated with the Ordinary Least Squares method. The results show that all three independent variables have a significant effect on the dependent variable: X1 has a significant positive effect on the unemployment rate, while X2 and X3 have significant negative effects on the unemployment rate in vocational higher education in Indonesia. These results indicate an oversupply of labor in vocational higher education in Indonesia.
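A minimal sketch of the OLS setup described above. The data, coefficient values and variable coding are placeholders assumed for illustration; the paper's actual Indonesian labour dataset is not reproduced here.

```python
# Fit Y ~ X1 + X2 + X3 by ordinary least squares and inspect the t-tests.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50
X1 = rng.normal(size=n)   # job opportunities (illustrative values)
X2 = rng.normal(size=n)   # policy indicator (illustrative values)
X3 = rng.normal(size=n)   # area (illustrative values)
Y = 1.0 + 0.8 * X1 - 0.5 * X2 - 0.3 * X3 + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([X1, X2, X3]))
model = sm.OLS(Y, X).fit()       # OLS parameter estimates
print(model.summary())           # coefficients, standard errors, t-tests, R^2
```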
{"title":"Unemployment Rates in Vocational Education in Indonesia Using Economic and Statistical Analysis","authors":"Suryadi, M. Romadona, Sigit Setiawan, Fachrizal, Andi Budiansyah, Syahrizal Maulana, Rahmi Lestari Helmi, Silmi Tsurayya, RY Kun Haribowo, Yuni Andari, Bagaskara, Ratna Sri Harjanti","doi":"10.19139/soic-2310-5070-1887","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1887","url":null,"abstract":"The linear regression model is used in this research to study the influence of the independent variable on the dependent variable. The dependent variable Y is the unemployment rate in vocational education, while the independent variables are X1 in the form of Job Opportunities, X2 in the form of Policy and X3 in the form of Area. To estimate model parameters, the Ordinary Least Square method is used. The research results show that the three independent variables have a significant effect on the dependent variable. Variable X1 has a significant positive effect on the unemployment rate, variables X2 and X3 have a significant negative effect on the unemployment rate in vocational higher education in Indonesia. From the results of this research, there has been an oversupply of labor in vocational higher education in Indonesia.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"31 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140513528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risk assessment in cryptocurrency portfolios: a composite hidden Markov factor analysis framework
Pub Date: 2024-01-05 | DOI: 10.19139/soic-2310-5070-1837
Mohamed Saidane
In this paper, we deal with the estimation of two widely used risk measures, Value-at-Risk (VaR) and Expected Shortfall (ES), in a cryptocurrency context. To account for regime switching in cryptocurrency volatilities and the dynamic interconnection between them, we propose a Monte Carlo-based approach using heteroskedastic factor analysis and hidden Markov models (HMM) combined with a structured variational Expectation-Maximization (EM) learning approach. This composite approach allows the construction of a diversified portfolio and determines an optimal allocation strategy that minimizes the conditional risk of the portfolio and maximizes the return. Out-of-sample prediction experiments show that the composite factorial HMM approach performs better, in terms of prediction accuracy, than other baseline methods presented in the literature. Moreover, our results show that the proposed methodology provides the best-performing crypto-asset allocation strategies and is clearly superior to existing methods in VaR and ES prediction.
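A minimal sketch of how VaR and ES can be read off simulated portfolio returns once a Monte Carlo sample is available. The Gaussian placeholder returns stand in for the factorial-HMM simulation step, which is not reproduced here.

```python
# Empirical VaR and Expected Shortfall from a sample of simulated returns.
import numpy as np

def var_es(returns, alpha=0.05):
    """VaR and ES at level alpha from simulated (or historical) returns."""
    losses = -np.asarray(returns)            # losses are negative returns
    var = np.quantile(losses, 1 - alpha)     # loss exceeded with probability alpha
    es = losses[losses >= var].mean()        # average loss beyond the VaR threshold
    return var, es

simulated = np.random.default_rng(1).normal(0.001, 0.04, size=10_000)  # placeholder returns
print(var_es(simulated, alpha=0.05))
```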
{"title":"Risk assessment in cryptocurrency portfolios: a composite hidden Markov factor analysis framework","authors":"Mohamed Saidane","doi":"10.19139/soic-2310-5070-1837","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1837","url":null,"abstract":"In this paper, we deal with the estimation of two widely used risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES) in a cryptocurrency context. To face the presence of regime switching in the cryptocurrency volatilities and the dynamic interconnection between them, we propose a Monte Carlo-based approach using heteroskedastic factor analysis and hidden Markov models (HMM) combined with a structured variational Expectation-Maximization (EM) learning approach. This composite approach allows the construction of a diversified portfolio and determines an optimal allocation strategy making it possible to minimize the conditional risk of the portfolio and maximize the return. The out-of-sample prediction experiments show that the composite factorial HMM approach performs better, in terms of prediction accuracy, than some other baseline methods presented in the literature. Moreover, our results show that the proposed methodology provides the best performing crypto-asset allocation strategies and it is also clearly superior to the existing methods in VaR and ES predictions.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140513747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyperspectral image restoration based on color superpixel segmentation
Pub Date: 2023-12-27 | DOI: 10.19139/soic-2310-5070-1912
Huiying Huang, Shaoting Peng, Gaohang Yu, Jinhong Huang, Wenyu Hu
Hyperspectral images (HSI) are often degraded by various types of noise during the acquisition process, such as Gaussian noise, impulse noise, dead lines and stripes. Recently, low-rank matrix/tensor-based methods for HSI restoration, which assume that the overall data is low-rank, have attracted growing attention. However, the assumption of overall low-rankness often proves inaccurate because of the spatially heterogeneous local similarity characteristics of HSI. Traditional cube-based methods divide the HSI into fixed-size cubes, but fixed-size cubes do not provide flexible coverage of locally similar regions at varying scales. Inspired by superpixel segmentation, this paper proposes the Shrink Low-rank Super-tensor (SLRST) approach for HSI recovery. Instead of fixed-size cubes, SLRST employs a size-adaptive super-tensor. The resulting problem is solved effectively using the Alternating Direction Method of Multipliers (ADMM). Numerical experiments on HSI data verify that the proposed method outperforms other competing methods.
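An illustrative singular-value thresholding step, the low-rank proximal update that commonly appears inside ADMM solvers for problems of this kind. The SLRST super-tensor construction and superpixel segmentation are not reproduced here; the matrix, threshold and data below are assumptions for the sketch.

```python
# Soft-threshold the singular values of a matrix: the low-rank proximal operator.
import numpy as np

def svt(matrix, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # small singular values are zeroed out
    return (U * s_shrunk) @ Vt               # low-rank reconstruction

noisy = np.random.default_rng(2).normal(size=(64, 64))
denoised = svt(noisy, tau=5.0)               # keeps only the dominant components
```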
{"title":"Hyperspectral image restoration based on color superpixel segmentation","authors":"Huiying Huang, Shaoting Peng, Gaohang Yu, Jinhong Huang, Wenyu Hu","doi":"10.19139/soic-2310-5070-1912","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1912","url":null,"abstract":"Hyperspectral images (HSI) are often degraded by various types of noise during the acquisition process, such as Gaussian noise, impulse noise, dead lines and stripes, etc. Recently, there exists a growing attenrion on low-rank matrix/tensor-based methods for HSI data restoration, assuming that the overall data is low-rank. However, the assumption of overall low-rankness often proves inaccurate due to the spatially heterogeneous local similarity characteristics of HSI. Traditional cube-based methods involve dividing the HSI into fixed-size cubes. However, using fixed-size cubes does not provide flexible coverage of locally similar regions at varying scales. Inspired by superpixel segmentation, this paper proposes the Shrink Low-rank Super-tensor (SLRST) approach for HSI recovery. Instead of using fixed-size cubes, SLRST employs a size-adaptive super-tensor. The proposed approach is effectively solved using the Alternating Direction Method of Multipliers (ADMM). Numerical experiments on HSI data verify that the proposed method outperforms other competing methods.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"52 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139154506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Failure rate, vitality, and residual lifetime measures: Characterizations based on stress-strength bivariate model with application to an automated life test data
Pub Date: 2023-11-17 | DOI: 10.19139/soic-2310-5070-1321
M. Eliwa, Abhishek Tyagi, Morad Alizadeh, M. El-Morshedy
In this article, we introduce some reliability concepts for the bivariate Pareto Type II distribution, including the joint hazard rate function, the CDF for parallel and series systems, the joint mean residual lifetime, and the joint vitality function. The maximum likelihood and Bayesian estimation methods are utilized to estimate the model parameters. A simulation study is carried out to assess the performance of the maximum likelihood and Bayesian estimators, and both approaches are found to work well in the estimation process. Finally, a real lifetime dataset is analyzed to show the flexibility and importance of the introduced bivariate model.
{"title":"Failure rate, vitality, and residual lifetime measures: Characterizations based on stress-strength bivariate model with application to an automated life test data","authors":"M. Eliwa, Abhishek Tyagi, Morad Alizadeh, M. El-Morshedy","doi":"10.19139/soic-2310-5070-1321","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1321","url":null,"abstract":"In this article, we introduce some reliability concepts for the bivariate Pareto Type II distribution including joint hazard rate function, CDF for parallel and series systems, joint mean residual lifetime, and joint vitality function. The maximum likelihood and Bayesian estimation methods are utilized to estimate the model parameters. Simulation is carried out to assess the performance of the maximum likelihood and Bayesian estimators, and it is found that the two approaches work quite well in estimation process. Finally, a real lifetime data is analyzed to show the flexibility and the importance of the introduced bivariate mode.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139265043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of Fuzzy Logic Controller Algorithms with MF optimization on FPGA
Pub Date: 2023-11-13 | DOI: 10.19139/soic-2310-5070-1790
Samet Ahmed, Kourd Yahia
In this work, we propose the design and implementation of a parallel-structured fuzzy logic controller with integral action and anti-windup. The Grey Wolf Optimization (GWO) technique is used to optimize the fuzzy rules, which allows the complicated algebraic operations of type-1 fuzzy logic algorithms to be reduced to straightforward numerical equations suitable for FPGA implementation. The techniques for driving a geared DC motor are optimized through the membership function structure of the controller's data propagation. The proposed controller was implemented in Xilinx System Generator (XSG) and co-simulated in hardware and software with the VIVADO and XSG tools.
{"title":"Implementation of Fuzzy Logic Controller Algorithms with MF optimization on FPGA","authors":"Samet Ahmed, Kourd Yahia","doi":"10.19139/soic-2310-5070-1790","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1790","url":null,"abstract":"In this work, we propose the design and implementation of a parallel-structured fuzzy logic controller with integral action and anti-windup. The Grey Wolf Optimization (GWO) optimization technique is used to optimize fuzzy rules, which allows for the complicated algebraic ideas of type 1 fuzzy logic algorithms to be reduced to straightforward numerical equations for FPGA target implementation. The techniques for operating a geared DC motor are optimized by the membership function structure of our controller's data propagation. Our proposed controller was implemented in Xilinx System Generator (XSG) and co-simulated on hardware and software with VIVADO and XSG tools.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"32 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139278843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical Analysis of Covid-19 Data using the Odd Log Logistic Kumaraswamy Distribution
Pub Date: 2023-11-13 | DOI: 10.19139/soic-2310-5070-1572
F. Opone, Kadir Karakaya, Ngozi O. Ubaka
This paper presents a statistical analysis of Covid-19 data using the Odd Log Logistic Kumaraswamy (OLLK) distribution. Some mathematical properties of the proposed OLLK distribution, such as the survival and hazard functions, quantile function, ordinary and incomplete moments, moment generating function, probability weighted moments, distribution of order statistics and Rényi entropy, are derived. Five estimators are examined for the unknown model parameters. The performance of the estimators is compared in an extensive simulation study based on the bias and mean square error criteria. Two Covid-19 data sets representing the percentage of daily recoveries of Covid-19 patients are used to illustrate the applicability of the proposed OLLK distribution. The results reveal that the OLLK distribution is a better alternative to some existing models with bounded support.
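A sketch of the bias/MSE simulation design described above, illustrated with a plain Kumaraswamy model and its maximum likelihood estimator rather than the full OLLK distribution or the five estimators studied in the paper; parameter values, sample size and replication count are illustrative assumptions.

```python
# Monte Carlo bias and MSE of the MLE for a Kumaraswamy(a, b) model on (0, 1).
import numpy as np
from scipy.optimize import minimize

def kuma_nll(params, x):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # pdf: f(x) = a*b*x^(a-1)*(1 - x^a)^(b-1)
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                   + (b - 1) * np.log1p(-x ** a))

def kuma_sample(a, b, size, rng):
    # inverse-CDF sampling: x = (1 - (1 - u)^(1/b))^(1/a)
    u = rng.uniform(size=size)
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

rng = np.random.default_rng(3)
true_a, true_b, n, reps = 2.0, 3.0, 50, 500
estimates = []
for _ in range(reps):
    x = kuma_sample(true_a, true_b, n, rng)
    fit = minimize(kuma_nll, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
    estimates.append(fit.x)
estimates = np.array(estimates)
print("bias:", estimates.mean(axis=0) - [true_a, true_b])
print("MSE :", ((estimates - [true_a, true_b]) ** 2).mean(axis=0))
```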
{"title":"Statistical Analysis of Covid-19 Data using the Odd Log Logistic Kumaraswamy Distribution","authors":"F. Opone, Kadir Karakaya, Ngozi O. Ubaka","doi":"10.19139/soic-2310-5070-1572","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1572","url":null,"abstract":"This paper presents a statistical analysis of Covid-19 data using the Odd log logistic kumaraswamy Kumaraswamy (OLLK) distribution. Some mathematical properties of the proposed OLLK distribution such as the survival and hazard functions, quantile function, ordinary and incomplete moments, moment generating function, probability weighted moment, distribution of order statistic and Renyi entropy were derived. Five estimators are examined for unknown model parameters. The performance of the estimators is compared using an extensive simulation study based on the bias and mean square error criteria. Two Covid-19 data sets representing the percentage of daily recoveries of Covid-19 patients are used to illustrate the applicability of the proposed OLLK distribution. Results revealed that the OLLK distribution is a better alternative to some existing models with bounded support.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"19 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139278356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of the Multicomponent Stress-Strength Reliability Model Under the Topp-Leone Distribution: Applications, Bayesian and Non-Bayesian Assessement
Pub Date: 2023-11-13 | DOI: 10.19139/soic-2310-5070-1685
M. Rasekhi, M. Saber, H. Yousof, Emadeldin I. A. Ali
The advantages of applying multicomponent stress-strength models lie in their ability to provide a comprehensive and accurate analysis of system reliability under real-world conditions. By accounting for the interactions between different stress components and identifying critical weaknesses, engineers can make informed decisions, leading to safer and more reliable designs. The primary emphasis of this research is placed on the Bayesian and classical estimation of a multicomponent stress-strength reliability model derived from the bounded Topp-Leone distribution. Both stress and strength are assumed to follow Topp-Leone distributions whose shape parameters differ while the scale parameters (which determine where the variables are bounded) remain the same. System reliability is evaluated using maximum likelihood estimation paired with parametric and non-parametric bootstrap, as well as Bayesian methods. Simulation studies are carried out to establish the degree of precision achieved by the various estimation methods. Finally, two real data sets are analysed in detail.
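A minimal Monte Carlo sketch of the single-component stress-strength quantity R = P(strength > stress) under standard Topp-Leone variates on (0, 1). The multicomponent s-out-of-k structure and the estimation methods of the paper are not reproduced here, and the shape parameters are illustrative assumptions.

```python
# Simulate P(strength > stress) with both variables Topp-Leone distributed.
import numpy as np

def topp_leone_sample(alpha, size, rng):
    # CDF on (0, 1): F(x) = (x*(2 - x))**alpha; invert for sampling
    u = rng.uniform(size=size)
    return 1 - np.sqrt(1 - u ** (1 / alpha))

rng = np.random.default_rng(4)
stress = topp_leone_sample(alpha=1.5, size=100_000, rng=rng)
strength = topp_leone_sample(alpha=3.0, size=100_000, rng=rng)
print("estimated R = P(strength > stress):", np.mean(strength > stress))
```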
{"title":"Estimation of the Multicomponent Stress-Strength Reliability Model Under the Topp-Leone Distribution: Applications, Bayesian and Non-Bayesian Assessement","authors":"M. Rasekhi, M. Saber, H. Yousof, Emadeldin I. A. Ali","doi":"10.19139/soic-2310-5070-1685","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1685","url":null,"abstract":"The advantages of applying multicomponent stress-strength models lie in their ability to provide a comprehensive and accurate analysis of system reliability under real-world conditions. By accounting for the interactions between different stress components and identifying critical weaknesses, engineers can make informed decisions, leading to safer and more reliable designs. The primary emphasis of this research is placed on the Bayesian and classical estimations of a multicomponent stress-strength reliability model that is derived from the bounded Topp Leone distribution. It is presumable that both stress and strength follow a Topp Leone distribution, but the shape parameters of each variable differ, and the scale parameters (which determine where the variable is bounded) remain the same. Statisticians utilize approaches such as maximum likelihood paired with parametric and non-parametric bootstrap, as well as Bayesian methods, in order to evaluate the dependability of a system. Bayesian methods are also utilized. Simulation studies are carried out with the intention of establishing the degree of precision that may be achieved by employing the various methods of estimating. For the sake of this example, two genuine data sets are dissected and examined in detail.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139279113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Topp-Leone Odd Burr X-G Family of Distributions: Properties and Applications
Pub Date: 2023-11-13 | DOI: 10.19139/soic-2310-5070-1673
B. Oluyede, B. Tlhaloganyang, Whatmore Sengweni
This paper proposes a new generalized family of distributions called the Topp-Leone odd Burr X-G (TLOBX-G) family, and its special model, the Topp-Leone odd Burr X-Weibull (TLOBX-W) distribution, is studied in detail. Structural properties are derived, including the hazard rate function, quantile function, density expansion, moments, Rényi entropy, and order statistics. The maximum likelihood technique is used to estimate the parameters of the new family of distributions, and a simulation study is carried out to assess the accuracy and consistency of these estimators. Finally, the applicability, usefulness, and flexibility of the TLOBX-W distribution are illustrated using two real-life datasets.
{"title":"The Topp-Leone Odd Burr X-G Family of Distributions: Properties and Applications","authors":"B. Oluyede, B. Tlhaloganyang, Whatmore Sengweni","doi":"10.19139/soic-2310-5070-1673","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1673","url":null,"abstract":"This paper proposes a new generalized family of distributions called the Topp-Leone odd Burr X-G (TLOBX-G) distribution and its special model, Topp-Leone odd Burr X-Weibull (TLOBX-W) is studied in detail. Structural properties are derived, including the hazard rate function, quantile function, density expansion, moments, R'enyi entropy, and order statistics. The maximum likelihood technique is used to estimate the parameters of the new family of distributions and a simulation study was carried out to assess the accuracy and consistency of these estimators. Finally, the applicability, usefulness, and flexibility of TLOBX-W distribution are illustrated using two real-life datasets.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"79 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139279086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new routing method based on ant colony optimization in vehicular ad-hoc network
Pub Date: 2023-11-13 | DOI: 10.19139/soic-2310-5070-1766
Oussama Sbayti, Khalid Housni
Vehicular Ad hoc Networks (VANETs) face significant challenges in providing high-quality service. These networks enable vehicles to exchange critical information, such as road obstacles and accidents, and support various communication modes known as Vehicle-to-Everything (V2X). This research paper proposes an intelligent method to improve the quality of service by optimizing path selection between vehicles, aiming to minimize network overhead and enhance routing efficiency. The proposed approach integrates Ant Colony Optimization (ACO) into the Optimized Link State Routing (OLSR) protocol. The effectiveness of this method is validated through implementation and simulation experiments conducted using the Simulation of Urban Mobility (SUMO) and the network simulator (NS3). Simulation results demonstrate that the proposed method outperforms the traditional OLSR algorithm in terms of throughput, average packet delivery rate (PDR), end-to-end delay (E2ED), and average routing overhead.
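A toy sketch of the ant-colony idea used for path selection: a probabilistic next-hop choice weighted by pheromone and heuristic desirability, followed by evaporation and reinforcement of the chosen route. The topology, weights and parameters are made-up assumptions; the OLSR/NS3/SUMO integration described in the paper is not shown.

```python
# Pheromone-weighted next-hop selection and update on a tiny 4-node topology.
import random

pheromone = {("A", "B"): 1.0, ("A", "C"): 1.0, ("B", "D"): 1.0, ("C", "D"): 1.0}
heuristic = {("A", "B"): 0.8, ("A", "C"): 0.5, ("B", "D"): 0.9, ("C", "D"): 0.4}

def choose_next(node, neighbours, alpha=1.0, beta=2.0):
    """Pick the next hop with probability proportional to pheromone^alpha * heuristic^beta."""
    weights = [pheromone[(node, n)] ** alpha * heuristic[(node, n)] ** beta
               for n in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]

def update(path_edges, cost, rho=0.1):
    for edge in pheromone:                   # evaporation on every link
        pheromone[edge] *= (1 - rho)
    for edge in path_edges:                  # reinforce the route that was used
        pheromone[edge] += 1.0 / cost

hop = choose_next("A", ["B", "C"])
update([("A", hop), (hop, "D")], cost=2.0)
print("next hop:", hop, "| pheromone:", pheromone)
```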
{"title":"A new routing method based on ant colony optimization in vehicular ad-hoc network","authors":"Oussama Sbayti, Khalid Housni","doi":"10.19139/soic-2310-5070-1766","DOIUrl":"https://doi.org/10.19139/soic-2310-5070-1766","url":null,"abstract":"Vehicular Ad hoc Networks (VANETs) face significant challenges in providing high-quality service. These networks enable vehicles to exchange critical information, such as road obstacles and accidents, and support various communication modes known as Vehicle-to-Everything (V2X). This research paper proposes an intelligent method to improve the quality of service by optimizing path selection between vehicles, aiming to minimize network overhead and enhance routing efficiency. The proposed approach integrates Ant Colony Optimization (ACO) into the Optimized Link State Routing (OLSR) protocol. The effectiveness of this method is validated through implementation and simulation experiments conducted using the Simulation of Urban Mobility (SUMO) and the network simulator (NS3). Simulation results demonstrate that the proposed method outperforms the traditional OLSR algorithm in terms of throughput, average packet delivery rate (PDR), end-to-end delay (E2ED), and average routing overhead.","PeriodicalId":131002,"journal":{"name":"Statistics, Optimization & Information Computing","volume":"32 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139279233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}