The Analytic Hierarchy Process (AHP) has been a widely used multi-criteria decision-making (MCDM) method since the 1980s because of its simplicity and rationality. However, the conventional AHP assumes that criteria are independent, which is not always accurate in realistic scenarios where interdependencies between criteria exist. Several methods have been proposed to relax the assumption of independent criteria in the AHP, e.g., the Analytic Network Process (ANP). However, these methods usually require a large number of pairwise comparison matrices (PCMs), which makes them hard to apply to complicated, large-scale problems. This paper presents a novel approach to this issue that incorporates discrete Markov Random Fields (MRFs) into the AHP framework. Our method enhances decision making by capturing interdependencies among criteria and producing weights that better reflect their actual influence. Moreover, we present a numerical example to illustrate the proposed method and compare the results with the conventional AHP and the Fuzzy Cognitive Map (FCM). The findings highlight our method's ability to influence global priority values and the ranking of alternatives when interdependencies between criteria are considered. These results suggest that the introduced method provides a flexible and adaptable framework for modeling interdependencies between criteria, ultimately leading to more accurate and reliable decision-making outcomes.
{"title":"Using Markov Random Field and Analytic Hierarchy Process to Account for Interdependent Criteria","authors":"Jih-Jeng Huang, Chin-Yi Chen","doi":"10.3390/a17010001","DOIUrl":"https://doi.org/10.3390/a17010001","url":null,"abstract":"The Analytic Hierarchy Process (AHP) has been a widely used multi-criteria decision-making (MCDM) method since the 1980s because of its simplicity and rationality. However, the conventional AHP assumes criteria independence, which is not always accurate in realistic scenarios where interdependencies between criteria exist. Several methods have been proposed to relax the postulation of the independent criteria in the AHP, e.g., the Analytic Network Process (ANP). However, these methods usually need a number of pairwise comparison matrices (PCMs) and make it hard to apply to a complicated and large-scale problem. This paper presents a groundbreaking approach to address this issue by incorporating discrete Markov Random Fields (MRFs) into the AHP framework. Our method enhances decision making by effectively and sensibly capturing interdependencies among criteria, reflecting actual weights. Moreover, we showcase a numerical example to illustrate the proposed method and compare the results with the conventional AHP and Fuzzy Cognitive Map (FCM). The findings highlight our method’s ability to influence global priority values and the ranking of alternatives when considering interdependencies between criteria. These results suggest that the introduced method provides a flexible and adaptable framework for modeling interdependencies between criteria, ultimately leading to more accurate and reliable decision-making outcomes.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":" November","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138960424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Angrisano, Giovanni Cappello, S. Gaglione, C. Gioia
Velocity estimation plays a key role in several applications, for instance, in navigation and mobile mapping systems, and GNSSs are currently a common means of achieving reliable and accurate velocity. Two approaches are mainly used to obtain velocity from GNSS measurements: Doppler observations and carrier phases differenced in time (TDCP). In a benign environment, Doppler-based velocity can be estimated accurately to within a few cm/s, while TDCP-based velocity can be estimated accurately to within a few mm/s. On the other hand, the TDCP technique is more prone to reduced availability and to the presence of blunders. In this work, the two approaches are tested using three devices of different grades: a high-grade geodetic receiver, a high-sensitivity receiver, and a GNSS chip mounted on a smartphone. The measurements of the geodetic receiver are inherently cleaner and provide an accurate solution, while the other two receivers yield worse results. The case of the smartphone GNSS chip is particularly critical owing to the equipped antenna, which makes the measurements noisy and largely affected by blunders. The GNSSs are considered separately in order to assess the performance of the individual systems. The analysis carried out in this research confirms the previous considerations about receiver grades and processing techniques. Additionally, the obtained results highlight the necessity of adopting a diagnostic approach to the measurements, such as RAIM-FDE, especially for low-grade receivers.
{"title":"Velocity Estimation Using Time-Differenced Carrier Phase and Doppler Shift with Different Grades of Devices: From Smartphones to Professional Receivers","authors":"A. Angrisano, Giovanni Cappello, S. Gaglione, C. Gioia","doi":"10.3390/a17010002","DOIUrl":"https://doi.org/10.3390/a17010002","url":null,"abstract":"Velocity estimation has a key role in several applications; for instance, velocity estimation in navigation or in mobile mapping systems and GNSSs is currently a common way to achieve reliable and accurate velocity. Two approaches are mainly used to obtain velocity based on GNSS measurements, i.e., Doppler observations and carrier phases differenced in time (that is, TDCP). In a benign environment, Doppler-based velocity can be estimated accurately to within a few cm/s, while TDCP-based velocity can be estimated accurately to within a few mm/s. On the other hand, the TDCP technique is more prone to availability shortage and the presence of blunders. In this work, the two mentioned approaches are tested, using three devices of different grades: a high-grade geodetic receiver, a high-sensitivity receiver, and a GNSS chip mounted on a smartphone. The measurements of geodetic receivers are inherently cleaner, providing an accurate solution, while the remaining two receivers provide worse results. The case of smartphone GNSS chips can be particularly critical owing to the equipped antenna, which makes the measurements noisy and largely affected by blunders. The GNSSs are considered separately in order to assess the performance of the single systems. The analysis carried out in this research confirms the previous considerations about receiver grades and processing techniques. Additionally, the obtained results highlight the necessity of adopting a diagnostic approach to the measurements, such as RAIM-FDE, especially for low-grade receivers.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"114 7","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138959646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yi Wang, Yating Xu, Tianjian Li, Tao Zhang, Jian Zou
Image deblurring based on sparse regularization has garnered significant attention, but certain limitations still need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact the deblurring performance, while non-convex sparse regularization poses challenges in terms of solving techniques. Furthermore, the performance of traditional iterative algorithms also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The use of CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is an advanced learning-based optimization algorithm that surpasses traditional optimization algorithms in efficiency and performance by using a state-of-the-art denoiser in place of the proximal operator. Numerical experiments verify the performance of the proposed algorithm in image deblurring.
{"title":"Image Deblurring Based on Convex Non-Convex Sparse Regularization and Plug-and-Play Algorithm","authors":"Yi Wang, Yating Xu, Tianjian Li, Tao Zhang, Jian Zou","doi":"10.3390/a16120574","DOIUrl":"https://doi.org/10.3390/a16120574","url":null,"abstract":"Image deblurring based on sparse regularization has garnered significant attention, but there are still certain limitations that need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact the deblurring performance, while non-convex sparse regularization poses challenges in terms of solving techniques. Furthermore, the performance of the traditional iterative algorithm also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The utilization of CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is an advanced learning-based optimization algorithm that surpasses traditional optimization algorithms in terms of efficiency and performance by utilizing the state-of-the-art denoiser to replace the proximal operator. Numerical experiments verify the performance of our proposed algorithm in image deblurring.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"5 2","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138995564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cindy Trinh, Silvia Lasala, O. Herbinet, Dimitrios Meimaroglou
This article investigates the applicability domain (AD) of machine learning (ML) models trained on high-dimensional data for the prediction of the ideal gas enthalpy of formation and entropy of molecules via descriptors. The AD is crucial as it describes the space of chemical characteristics in which the model can make predictions with a given reliability. This work studies the AD definition of an ML model throughout its development procedure: during data preprocessing, model construction and model deployment. Three AD definition methods, commonly used for outlier detection in high-dimensional problems, are compared: isolation forest (iForest), random forest prediction confidence (RF confidence) and k-nearest neighbors in the 2D projection of the descriptor space obtained via t-distributed stochastic neighbor embedding (tSNE2D/kNN). These methods compute an anomaly score that can be used instead of the distance metrics of classical low-dimension AD definition methods, the latter being generally unsuitable for high-dimensional problems. Typically, a molecule is considered to lie within the AD if its distance from the training domain (in low-dimensional problems) or its anomaly score (in high-dimensional problems) is below a given threshold. During data preprocessing, the three AD definition methods are used to identify outlier molecules, and the effect of their removal is investigated. A more significant improvement of model performance is observed when outliers identified with RF confidence are removed (e.g., for a removal of 30% of outliers, the MAE (Mean Absolute Error) of the test dataset is divided by 2.5, 1.6 and 1.1 for RF confidence, iForest and tSNE2D/kNN, respectively). While these three methods identify X-outliers, the effect of other types of outliers, namely Model-outliers and y-outliers, is also investigated. In particular, the elimination of X-outliers followed by that of Model-outliers enables us to divide the MAE and RMSE (Root Mean Square Error) by 2 and 3, respectively, while reducing overfitting. The elimination of y-outliers does not have a significant effect on model performance. During model construction and deployment, the AD serves to verify the position of the test data and of different categories of molecules with respect to the training data and to associate this position with their prediction accuracy. For data that are found to be close to the training data according to RF confidence yet display high prediction errors, tSNE 2D representations are deployed to identify the possible sources of these errors (e.g., the representation of the chemical information in the training data).
{"title":"On the Development of Descriptor-Based Machine Learning Models for Thermodynamic Properties: Part 2—Applicability Domain and Outliers","authors":"Cindy Trinh, Silvia Lasala, O. Herbinet, Dimitrios Meimaroglou","doi":"10.3390/a16120573","DOIUrl":"https://doi.org/10.3390/a16120573","url":null,"abstract":"This article investigates the applicability domain (AD) of machine learning (ML) models trained on high-dimensional data, for the prediction of the ideal gas enthalpy of formation and entropy of molecules via descriptors. The AD is crucial as it describes the space of chemical characteristics in which the model can make predictions with a given reliability. This work studies the AD definition of a ML model throughout its development procedure: during data preprocessing, model construction and model deployment. Three AD definition methods, commonly used for outlier detection in high-dimensional problems, are compared: isolation forest (iForest), random forest prediction confidence (RF confidence) and k-nearest neighbors in the 2D projection of descriptor space obtained via t-distributed stochastic neighbor embedding (tSNE2D/kNN). These methods compute an anomaly score that can be used instead of the distance metrics of classical low-dimension AD definition methods, the latter being generally unsuitable for high-dimensional problems. Typically, in low- (high-) dimensional problems, a molecule is considered to lie within the AD if its distance from the training domain (anomaly score) is below a given threshold. During data preprocessing, the three AD definition methods are used to identify outlier molecules and the effect of their removal is investigated. A more significant improvement of model performance is observed when outliers identified with RF confidence are removed (e.g., for a removal of 30% of outliers, the MAE (Mean Absolute Error) of the test dataset is divided by 2.5, 1.6 and 1.1 for RF confidence, iForest and tSNE2D/kNN, respectively). While these three methods identify X-outliers, the effect of other types of outliers, namely Model-outliers and y-outliers, is also investigated. In particular, the elimination of X-outliers followed by that of Model-outliers enables us to divide MAE and RMSE (Root Mean Square Error) by 2 and 3, respectively, while reducing overfitting. The elimination of y-outliers does not display a significant effect on the model performance. During model construction and deployment, the AD serves to verify the position of the test data and of different categories of molecules with respect to the training data and associate this position with their prediction accuracy. For the data that are found to be close to the training data, according to RF confidence, and display high prediction errors, tSNE 2D representations are deployed to identify the possible sources of these errors (e.g., representation of the chemical information in the training data).","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"41 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138965324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K-Means is a “de facto” standard clustering algorithm due to its simplicity and efficiency. K-Means, though, strongly depends on the initialization of the centroids (seeding method) and often gets stuck in a local sub-optimal solution. K-Means, in fact, mainly acts as a local refiner of the centroids and is unable to move centroids across the whole data space. Random Swap was defined to go beyond K-Means; its modus operandi integrates K-Means into a global strategy of centroid management, which can often generate a clustering solution close to the global optimum. This paper proposes an approach which extends both K-Means and Random Swap and improves the clustering accuracy through an evolutionary technique and careful seeding. Two new algorithms are proposed: Population-Based K-Means (PB-KM) and Population-Based Random Swap (PB-RS). Both algorithms consist of two steps: first, a population of J candidate solutions is built, and then the candidate centroids are repeatedly recombined toward a final accurate solution. The paper motivates the design of PB-KM and PB-RS, outlines their current implementation in Java based on parallel streams, and demonstrates the achievable clustering accuracy using both synthetic and real-world datasets.
{"title":"Improving Clustering Accuracy of K-Means and Random Swap by an Evolutionary Technique Based on Careful Seeding","authors":"L. Nigro, F. Cicirelli","doi":"10.3390/a16120572","DOIUrl":"https://doi.org/10.3390/a16120572","url":null,"abstract":"K-Means is a “de facto” standard clustering algorithm due to its simplicity and efficiency. K-Means, though, strongly depends on the initialization of the centroids (seeding method) and often gets stuck in a local sub-optimal solution. K-Means, in fact, mainly acts as a local refiner of the centroids, and it is unable to move centroids all over the data space. Random Swap was defined to go beyond K-Means, and its modus operandi integrates K-Means in a global strategy of centroids management, which can often generate a clustering solution close to the global optimum. This paper proposes an approach which extends both K-Means and Random Swap and improves the clustering accuracy through an evolutionary technique and careful seeding. Two new algorithms are proposed: the Population-Based K-Means (PB-KM) and the Population-Based Random Swap (PB-RS). Both algorithms consist of two steps: first, a population of J candidate solutions is built, and then the candidate centroids are repeatedly recombined toward a final accurate solution. The paper motivates the design of PB-KM and PB-RS, outlines their current implementation in Java based on parallel streams, and demonstrates the achievable clustering accuracy using both synthetic and real-world datasets.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"6 12","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138966533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At present, synthetic biology applications are based on programming synthetic bacteria with custom-designed genetic circuits through a top-down strategy. These genetic circuits are the programs that implement a certain algorithm, the bacterium being the agent or shell responsible for executing the program in a given environment. In this work, we study the possibility that, instead of programming synthetic bacteria through a custom-designed genetic circuit, the circuit itself emerges as the result of evolution simulated through an evolutionary algorithm. This study is conducted by performing in silico experiments in a community of synthetic bacteria in which one species or strain behaves as a pathogen toward the rest of the non-pathogenic bacteria that make up the consortium. The goal is the eradication of the pathogenic strain through the evolutionary programming of the agents, i.e., the synthetic bacteria. The results obtained suggest that an appropriate genetic circuit can plausibly be designed by evolution following a bottom-up strategy and, therefore, that the evolutionary programming of synthetic bacteria is experimentally feasible.
{"title":"Evolutionary Algorithms in a Bacterial Consortium of Synthetic Bacteria","authors":"Sara Lledó Villaescusa, Rafael Lahoz-Beltra","doi":"10.3390/a16120571","DOIUrl":"https://doi.org/10.3390/a16120571","url":null,"abstract":"At present, synthetic biology applications are based on the programming of synthetic bacteria with custom-designed genetic circuits through the application of a top-down strategy. These genetic circuits are the programs that implement a certain algorithm, the bacterium being the agent or shell responsible for the execution of the program in a given environment. In this work, we study the possibility that instead of programming synthesized bacteria through a custom-designed genetic circuit, it is the circuit itself which emerges as a result of the evolution simulated through an evolutionary algorithm. This study is conducted by performing in silico experiments in a community composed of synthetic bacteria in which one species or strain behaves as pathogenic bacteria against the rest of the non-pathogenic bacteria that are also part of the bacterial consortium. The goal is the eradication of the pathogenic strain through the evolutionary programming of the agents or synthetic bacteria. The results obtained suggest the plausibility of the evolutionary design of the appropriate genetic circuit resulting from the application of a bottom-up strategy and therefore the experimental feasibility of the evolutionary programming of synthetic bacteria.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"9 4","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138966227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Angel A. Juan, Markus Rabe, Majsa Ammouriova, Javier Panadero, David Peidro, Daniel Riera
In the field of logistics and transportation (L&T), this paper reviews the utilization of simheuristic algorithms to address NP-hard optimization problems under stochastic uncertainty. Then, the paper explores an extension of the simheuristics concept by introducing a fuzzy layer to tackle complex optimization problems involving both stochastic and fuzzy uncertainties. The hybrid approach combines simulation, metaheuristics, and fuzzy logic, offering a feasible methodology to solve large-scale NP-hard problems under general uncertainty scenarios. These scenarios are commonly encountered in L&T optimization challenges, such as the vehicle routing problem or the team orienteering problem, among many others. The proposed methodology allows for modeling various problem components—including travel times, service times, customers’ demands, or the duration of electric batteries—as deterministic, stochastic, or fuzzy items. A cross-problem analysis of several computational experiments is conducted to validate the effectiveness of the fuzzy simheuristic methodology. Being a flexible methodology that allows us to tackle NP-hard challenges under general uncertainty scenarios, fuzzy simheuristics can also be applied in fields other than L&T.
{"title":"Solving NP-Hard Challenges in Logistics and Transportation under General Uncertainty Scenarios Using Fuzzy Simheuristics","authors":"Angel A. Juan, Markus Rabe, Majsa Ammouriova, Javier Panadero, David Peidro, Daniel Riera","doi":"10.3390/a16120570","DOIUrl":"https://doi.org/10.3390/a16120570","url":null,"abstract":"In the field of logistics and transportation (L&T), this paper reviews the utilization of simheuristic algorithms to address NP-hard optimization problems under stochastic uncertainty. Then, the paper explores an extension of the simheuristics concept by introducing a fuzzy layer to tackle complex optimization problems involving both stochastic and fuzzy uncertainties. The hybrid approach combines simulation, metaheuristics, and fuzzy logic, offering a feasible methodology to solve large-scale NP-hard problems under general uncertainty scenarios. These scenarios are commonly encountered in L&T optimization challenges, such as the vehicle routing problem or the team orienteering problem, among many others. The proposed methodology allows for modeling various problem components—including travel times, service times, customers’ demands, or the duration of electric batteries—as deterministic, stochastic, or fuzzy items. A cross-problem analysis of several computational experiments is conducted to validate the effectiveness of the fuzzy simheuristic methodology. Being a flexible methodology that allows us to tackle NP-hard challenges under general uncertainty scenarios, fuzzy simheuristics can also be applied in fields other than L&T.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"61 8","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138967831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Athina Daniilidou, A. Konguetsof, Georgios Souliotis, Basil Papadopoulos
In this research paper, a generator of fuzzy implications based on theorems and axioms of fuzzy logic is derived, analyzed and applied. The family presented generates fuzzy implications according to the value of a selected parameter. The obtained fuzzy implications should satisfy a number of axioms, and the conditions for satisfying the maximum number of axioms are denoted. New theorems are stated and proven based on the property that a strong fuzzy implication induces a fuzzy negation. In this work, the data were fuzzified for the application of the new formulae; the fuzzification was undertaken using four kinds of membership degree functions. The new fuzzy functions were compared based on the results obtained after a number of repetitions. The proposed methodology presents a new family of fuzzy implications, and an algorithm is shown that produces fuzzy implications so that the optimal method of the generator can be selected according to the value of a free parameter.
{"title":"Generator of Fuzzy Implications","authors":"Athina Daniilidou, A. Konguetsof, Georgios Souliotis, Basil Papadopoulos","doi":"10.3390/a16120569","DOIUrl":"https://doi.org/10.3390/a16120569","url":null,"abstract":"In this research paper, a generator of fuzzy methods based on theorems and axioms of fuzzy logic is derived, analyzed and applied. The family presented generates fuzzy implications according to the value of a selected parameter. The obtained fuzzy implications should satisfy a number of axioms, and the conditions of satisfying the maximum number of axioms are denoted. New theorems are stated and proven based on the rule that the fuzzy function of fuzzy implication, which is strong, leads to fuzzy negation. In this work, the data taken were fuzzified for the application of the new formulae. The fuzzification of the data was undertaken using four kinds of membership degree functions. The new fuzzy functions were compared based on the results obtained after a number of repetitions. The new proposed methodology presents a new family of fuzzy implications, and also an algorithm is shown that produces fuzzy implications so as to be able to select the optimal method of the generator according to the value of a free parameter.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"13 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139000374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N. Makaram, Sarvagya Gupta, M. Pesce, J. Bolton, Scellig Stone, Daniel Haehn, Marc Pomplun, Christos Papadelis, Phillip L Pearl, Alexander Rotenberg, P. E. Grant, Eleonora Tamilia
In drug-resistant epilepsy, a visual inspection of intracranial electroencephalography (iEEG) signals is often needed to localize the epileptogenic zone (EZ) and guide neurosurgery. The visual assessment of iEEG time-frequency (TF) images is an alternative to signal inspection, but subtle variations may escape the human eye. Here, we propose a deep learning-based metric of visual complexity to interpret TF images extracted from iEEG data and assess its ability to identify the EZ in the brain. We analyzed interictal iEEG data from 1928 contacts recorded from 20 children with drug-resistant epilepsy who became seizure-free after neurosurgery. We localized each iEEG contact in the MRI, created TF images (1–70 Hz) for each contact, and used a pre-trained VGG16 network to measure their visual complexity by extracting unsupervised activation energy (UAE) from 13 convolutional layers. We identified points of interest in the brain using the UAE values via patient- and layer-specific thresholds (based on the extreme value distribution) and using a support vector machine classifier. Results show that contacts inside the seizure onset zone exhibit lower UAE than those outside, with larger differences in deep layers (L10, L12, and L13: p < 0.001). Furthermore, the points of interest identified using the support vector machine localized the EZ with 7 mm accuracy. In conclusion, we present a pre-surgical computerized tool that facilitates EZ localization in the patient's MRI without requiring long-term iEEG inspection.
{"title":"Deep Learning-Based Visual Complexity Analysis of Electroencephalography Time-Frequency Images: Can It Localize the Epileptogenic Zone in the Brain?","authors":"N. Makaram, Sarvagya Gupta, M. Pesce, J. Bolton, Scellig Stone, Daniel Haehn, Marc Pomplun, Christos Papadelis, Phillip L Pearl, Alexander Rotenberg, P. E. Grant, Eleonora Tamilia","doi":"10.3390/a16120567","DOIUrl":"https://doi.org/10.3390/a16120567","url":null,"abstract":"In drug-resistant epilepsy, a visual inspection of intracranial electroencephalography (iEEG) signals is often needed to localize the epileptogenic zone (EZ) and guide neurosurgery. The visual assessment of iEEG time-frequency (TF) images is an alternative to signal inspection, but subtle variations may escape the human eye. Here, we propose a deep learning-based metric of visual complexity to interpret TF images extracted from iEEG data and aim to assess its ability to identify the EZ in the brain. We analyzed interictal iEEG data from 1928 contacts recorded from 20 children with drug-resistant epilepsy who became seizure-free after neurosurgery. We localized each iEEG contact in the MRI, created TF images (1–70 Hz) for each contact, and used a pre-trained VGG16 network to measure their visual complexity by extracting unsupervised activation energy (UAE) from 13 convolutional layers. We identified points of interest in the brain using the UAE values via patient- and layer-specific thresholds (based on extreme value distribution) and using a support vector machine classifier. Results show that contacts inside the seizure onset zone exhibit lower UAE than outside, with larger differences in deep layers (L10, L12, and L13: p < 0.001). Furthermore, the points of interest identified using the support vector machine, localized the EZ with 7 mm accuracy. In conclusion, we presented a pre-surgical computerized tool that facilitates the EZ localization in the patient’s MRI without requiring long-term iEEG inspection.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"6 3","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138997990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crack inspection in railway sleepers is crucial for ensuring rail safety and avoiding deadly accidents. Traditional methods for detecting cracks on railway sleepers are very time-consuming and lack efficiency. Therefore, researchers are nowadays paying attention to vision-based algorithms, especially deep learning algorithms. In this work, we adopted the U-net for the first time for detecting cracks on a railway sleeper and proposed a modified U-net architecture named Dense U-net for segmenting the cracks. In the Dense U-net structure, we established several short connections between the encoder and decoder blocks, which enabled the architecture to obtain better pixel-information flow. Thus, the model extracted the necessary information in more detail to predict the cracks. We collected images from railway sleepers, processed them into a dataset, and finally trained the model with the images. The model achieved an overall F1-score, precision, recall, and IoU of 86.5%, 88.53%, 84.63%, and 76.31%, respectively. We compared our suggested model with the original U-net, and the results demonstrate that our model performed better than the U-net both quantitatively and qualitatively. Moreover, we considered the necessity of crack-severity analysis and measured a few parameters of the cracks. Engineers need to know the severity of the cracks to identify the most severely affected locations and take the necessary steps to repair the badly affected sleepers.
{"title":"Vision-Based Concrete-Crack Detection on Railway Sleepers Using Dense U-Net Model","authors":"M. Khan, Seong-Hoon Kee, A. Nahid","doi":"10.3390/a16120568","DOIUrl":"https://doi.org/10.3390/a16120568","url":null,"abstract":"Crack inspection in railway sleepers is crucial for ensuring rail safety and avoiding deadly accidents. Traditional methods for detecting cracks on railway sleepers are very time-consuming and lack efficiency. Therefore, nowadays, researchers are paying attention to vision-based algorithms, especially Deep Learning algorithms. In this work, we adopted the U-net for the first time for detecting cracks on a railway sleeper and proposed a modified U-net architecture named Dense U-net for segmenting the cracks. In the Dense U-net structure, we established several short connections between the encoder and decoder blocks, which enabled the architecture to obtain better pixel information flow. Thus, the model extracted the necessary information in more detail to predict the cracks. We collected images from railway sleepers, processed them in a dataset, and finally trained the model with the images. The model achieved an overall F1-score, precision, Recall, and IoU of 86.5%, 88.53%, 84.63%, and 76.31%, respectively. We compared our suggested model with the original U-net, and the results demonstrate that our model performed better than the U-net in both quantitative and qualitative results. Moreover, we considered the necessity of crack severity analysis and measured a few parameters of the cracks. The engineers must know the severity of the cracks to have an idea about the most severe locations and take the necessary steps to repair the badly affected sleepers.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"14 17","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139001298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}