Pub Date: 2024-08-07 | DOI: 10.1007/s00500-024-09829-2
Ranjeet Kaur, Alka Tripathi
Computing models such as crisp automata, fuzzy automata, and general fuzzy automata (GFA) are used to represent complex systems over predefined input alphabets or symbols. A framework that can process words rather than symbols is needed to simulate applications based on natural language. Semantic computing (SC) offers a technique to accommodate semantically similar words instead of predefined words, thus extending the applicability and flexibility of GFA. In the present work, a hybrid model of GFA and SC is proposed to deal with situations where the input is user-dependent or consists of words with semantically similar meanings. In traditional automata theory, changing the input symbols requires defining a new automaton, whereas in the proposed work the existing GFA can process the semantically similar external words instead. An application related to a transportation e-service is further discussed to illustrate the enhanced flexibility and applicability of the proposed models.
{"title":"Hybrid model of general fuzzy automata and semantic computing: an application to transportation e-service","authors":"Ranjeet Kaur, Alka Tripathi","doi":"10.1007/s00500-024-09829-2","DOIUrl":"https://doi.org/10.1007/s00500-024-09829-2","url":null,"abstract":"<p>The computing models such as crisp automata, fuzzy automata and general fuzzy automata (GFA) are used to represent complex systems for predefined input alphabets or symbols. A framework that can process words rather than symbols is needed to simulate applications based on the natural language. Semantic computing (SC) offers a technique to accommodate semantically similar words instead of predefined words, thus extends the applicability and flexibility of GFA. In present work, a hybrid model of GFA and SC is proposed to deal with a situation where input can be user-dependent or related to words that have semantically similar meanings. In traditional theory of automata, if input symbols are changed one must define a new automata, whereas in the proposed work instead of defining a new GFA, existing GFA can process the semantically similar external words. An application related to transportation e-service is further discussed to understand the enhanced flexibility and applicability of the proposed models.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"51 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-07 | DOI: 10.1007/s00500-024-09874-x
Wenjun Liu, Huan Guo, Jiaxin Gan, Hai Wang, Hailan Wang, Chao Zhang, Qingcheng Peng, Yuyan Sun, Bao Yu, Mengshu Hou, Bo Li, Xiaolei Li
Topic detection is an information processing technology designed to help people cope with the growing volume of information on the Internet. In the research literature, topic detection methods perform topic classification through word embeddings and supervised or unsupervised approaches. However, most topic detection methods only address the clustering problem and do not tackle the loss of detection accuracy caused by the cohesiveness of topics. In addition, the ordering of biterms during topic detection can cause substantial deviations in the detected topic content. To solve these problems, this paper proposes a topic detection method based on a KM-LSH fusion algorithm and an improved BTM model. The KM-LSH fusion algorithm combines K-means clustering with LSH-based refinement clustering; it addresses the cohesiveness problem, while the improved BTM model removes the influence of biterm ordering on topic detection. First, text vectors are constructed by preprocessing the collected set of microblog texts. Second, the KM-LSH fusion algorithm computes text similarity and performs topic clustering and refinement. Finally, the improved BTM model, combined with word-position information and an improved TF-IDF weight calculation algorithm, models the clustered microblog texts. The experimental results indicate that the proposed KM-LSH-IBTM method improves the evaluation indexes compared with three other topic detection methods. In conclusion, KM-LSH-IBTM improves topic detection with respect to both topic cohesiveness and biterm ordering.
{"title":"A topic detection method based on KM-LSH Fusion algorithm and improved BTM model","authors":"Wenjun Liu, Huan Guo, Jiaxin Gan, Hai Wang, Hailan Wang, Chao Zhang, Qingcheng Peng, Yuyan Sun, Bao Yu, Mengshu Hou, Bo Li, Xiaolei Li","doi":"10.1007/s00500-024-09874-x","DOIUrl":"https://doi.org/10.1007/s00500-024-09874-x","url":null,"abstract":"<p>Topic detection is an information processing technology designed to help people deal with the growing problem of data information on the Internet. In the research literature, topic detection methods are used for topic classification through word embedding, supervised-based and unsupervised-based approaches. However, most methods for topic detection only address the problem of clustering and do not focus on the problem of topic detection accuracy reduction due to the cohesiveness of topics. Also, the sequence of biterm during topic detection can cause substantial deviations in the detected topic content. To solve the above problems, this paper proposes a topic detection method based on KM-LSH fusion algorithm and improved BTM model. KM-LSH fusion algorithm is a fusion algorithm that combines K-means clustering and LSH refinement clustering. The proposed method can solve the problem of cohesiveness of topic detection, and the improved BTM model can solve the influence of the sequence of biterm on topic detection. First, the text vector is constructed by processing the collected set of microblog texts using text preprocessing methods. Secondly, the KM-LSH fusion algorithm is used to calculate text similarity and perform topic clustering and refinement. Finally, the improved BTM model is used to model the texts, which is combined with the word position and the improved TF-IDF weight calculation algorithm to adjust the microblogging texts in clustering. The experiment results indicate that the proposed KM-LSH-IBTM method improves the evaluation indexes compared with the other three topic detection methods. In conclusion, the proposed KM-LSH-IBTM method promotes the processing capability of topic detection in terms of cohesiveness and the sequence of biterm.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"11 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09830-9
Bala Venkateswarlu Isunuri, Jagadeesh Kakarla
The classification of brain tumor images is a central task in computer-aided brain tumor diagnosis. Recently, three-class classification has become a prominent setting for brain tumor type classification. Existing models are fine-tuned for a single dataset and hence may perform poorly on other datasets. Thus, there is a need for a generalized model that produces superior performance on multiple datasets. In this paper, we present a generalized model that produces consistent results on two datasets. We propose an EfficientNet and Mixed Convolution Network model for three-class brain tumor type classification. We devise a mixed convolution network to enhance the feature vector extracted by a pre-trained EfficientNet. The proposed network consists of two blocks, namely separable convolution and residual convolution. We place a Gaussian dropout layer before the softmax layer to avoid overfitting. In our experiments, two publicly available datasets (BTDS and CPM) are used to evaluate the proposed model. The BTDS dataset comprises three tumor types: meningioma, glioma, and pituitary. The CPM dataset comprises three glioma subtypes: glioblastoma, oligodendroglioma, and astrocytoma. We achieve accuracies of 98.04% and 96.00% on the BTDS and CPM datasets, respectively. The proposed model outperforms existing pre-trained models and state-of-the-art models on the key metrics.
{"title":"EfficientNet and mixed convolution network for three-class brain tumor magnetic resonance image classification","authors":"Bala Venkateswarlu Isunuri, Jagadeesh Kakarla","doi":"10.1007/s00500-024-09830-9","DOIUrl":"https://doi.org/10.1007/s00500-024-09830-9","url":null,"abstract":"<p>The classification of brain tumor images is the prevalent task in computer-aided brain tumor diagnosis. Recently, three-class classification has become a superlative task in brain tumor type classification. The existing models are fine-tuned for a single dataset, and hence, they may exhibit displeasing results on other datasets. Thus, there is a need for a generalized model that can produce superior performance on multiple datasets. In this paper, we have presented a generalized model that produces similar results on two datasets. We have proposed an EfficientNet and Mixed Convolution Network model to perform a three-class brain tumor type classification. We have devised a mixed convolution network to enhance the feature vector extracted from pre-trained EfficientNet. The proposed network consists of two blocks, namely, separable convolution and residual convolution. We have utilized a Gaussian dropout layer before the softmax layer to avoid model overfitting. In our experiments, two publicly available datasets (BTDS and CPM) are considered for the evaluation of the proposed model. The BTDS dataset has been segregated into three tumor types: Meningioma, Glioma, and Pituitary. The CPM dataset has been divided into three glioma subtypes: Glioblastoma, Oligodendroglioma, and Astrocytoma. We have achieved an accuracy of 98.04% and 96.00% on BTDS and CPM datasets, respectively. The proposed model outperforms existing pre-trained models and state-of-the-art models in vital metrics.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"22 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09842-5
Abdolsalam Ghaderi, Zahra Hosseinzadeh Bandbon, Anwar Mahmoodi
Supply and service networks are vulnerable to hazards stemming from both unintentional and intentional human actions, as well as natural calamities. Ensuring the resilience of vital infrastructure in these networks requires addressing the interdiction facility location problem. Currently, diverse groups of attackers target supply and service systems to cause maximum disruption, and collaboration among attackers makes assessments of system vulnerability more accurate and realistic. This research examines interdiction location problems with different defense systems and heterogeneous attackers. To address this challenge, a mixed-integer non-linear bi-level programming model is formulated. Heuristic optimization methods, including variable neighborhood search, simulated annealing, and a hybrid variable neighborhood search, are used to solve the model efficiently. The outcomes of our investigation suggest that system damage escalates when attackers collaborate, even under the various protective methods. Furthermore, the findings illustrate the efficacy of the suggested algorithms in resolving interdiction location problems within supply or service networks.
{"title":"A bi-level model and heuristic techniques with various neighborhood strategies for covering interdiction problem with fortification","authors":"Abdolsalam Ghaderi, Zahra Hosseinzadeh Bandbon, Anwar Mahmoodi","doi":"10.1007/s00500-024-09842-5","DOIUrl":"https://doi.org/10.1007/s00500-024-09842-5","url":null,"abstract":"<p>Supply or service networks are vulnerable to hazards that can stem from both unintentional and intentional human actions, as well as natural calamities. To ensure vital infrastructure resilience in these networks, address the interdiction facility location problem. Currently, diverse groups of attackers target supply and service systems to cause maximum disruption. Collaboration among attackers improves system vulnerability detection accuracy and realism. This research examines the challenges of interdiction location with different defense systems and heterogeneous attackers. To address this challenge, a mixed-integer non-linear bi-level programming model was considered. Heuristic optimization methods including variable neighborhood search, simulated annealing, and hybrid variable neighborhood search are used to efficiently solve the suggested model. The outcomes of our investigation suggest that implementing various protective methods leads to an escalation in system damage when attackers collaborate. Furthermore, the findings illustrate the efficacy of the suggested algorithms in resolving interdiction location issues within supply or service networks.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"318 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09811-y
Hadi Fattahi, Hossein Ghaedi
Owing to extensive construction projects, demand for building stones is surging. In quarry-based processing facilities, a pivotal factor in the production of building stones is the performance of band saw machines in cutting the stone, and here the Maximum Electric Current (MEC) emerges as a critical variable. Identifying this factor requires a comprehensive grasp of the inherent properties of the stone, since it profoundly influences costs, equipment depreciation, and production rates. Estimating the MEC poses numerous challenges because of the uncertainty inherent in geological and geotechnical parameters at each point. Conventional methods, such as numerical, experimental, analytical, and regression approaches, often overlook the uncertainty in rock parameters, leading to simplistic non-linear models built on simplified assumptions that may lack high accuracy. Consequently, this article employs intelligent methods to overcome these challenges and achieve an accurate solution; such methods can build complex, non-linear models efficiently, minimizing both time and cost. This study therefore employs two optimization algorithms, the Artificial Bee Colony (ABC) and Sine Cosine (SC) algorithms, to estimate MEC in quarry operations. To this end, 120 test samples drawn from 12 distinct types of carbonate rock obtained from a marble factory in the Mahalat region of Iran were used. The input parameters comprised Young's modulus, Mohs hardness, uniaxial compressive strength (UCS), production rate, and the F-Schimazek abrasion factor. The dataset was partitioned, allocating 80% (70 data points) for model development and reserving 20% (18 data points) for model validation. Three statistical criteria were used to analyze the modeling outcomes: the squared correlation coefficient, mean square error, and root mean square error. The results reveal that the developed model demonstrates high accuracy and minimal error, closely approximating real values; hence, it can serve as a valuable tool for engineers in rock engineering. Finally, the @RISK software was employed for sensitivity analysis of the model's output. The analyses reveal that, among the input parameters in the quarry context, UCS exerts the most substantial influence on the model's output: even slight variations in UCS can lead to significant alterations in MEC within quarry operations.
{"title":"Optimizing building stone-cutting in quarries: a study on estimation of maximum electric current using ABC and SC algorithms","authors":"Hadi Fattahi, Hossein Ghaedi","doi":"10.1007/s00500-024-09811-y","DOIUrl":"https://doi.org/10.1007/s00500-024-09811-y","url":null,"abstract":"<p>In today's context, due to the extensive construction projects, there is a surging demand for building stones. Within quarry-based processing facilities, a pivotal aspect influencing the production of these building stones pertains to evaluating the performance of band saw machines, particularly concerning the cutting of these stones. In this context, Maximum Electric Current (MEC) emerges as a critical variable. To identify this crucial factor, it necessitates a comprehensive grasp of the inherent properties of the stone since it profoundly influences costs, equipment depreciation, and production rates. Estimating the MEC poses numerous challenges and complications due to the uncertainty inherent in geological and geotechnical parameters at each point. Conventional and traditional methods, such as numerical, experimental, analytical, and regression methods, have limitations as they often overlook the uncertainty in rock parameters, leading to the construction of simplistic and non-linear models with simplified assumptions in analytical methods that may lack high accuracy. Consequently, this article employs intelligent methods to overcome these challenges and achieve an optimal solution with high accuracy. Using intelligent methods, it becomes possible to create complex and non-linear models efficiently, minimizing both time and cost. Consequently, this study addresses these challenges by employing two optimization algorithms: Artificial Bee Colony (ABC) and Sine Cosine (SC) Algorithms to estimate MEC specifically in quarry operations. In pursuit of this objective, 120 test samples drawn from 12 distinct types of carbonate rocks obtained from a marble factory in the Mahalat region of Iran were utilized. The considered input parameters encompassed Young's modulus, Mohs hardness, uniaxial compressive strength (<i>UCS</i>), production rate and F-Schimazek abrasion factors. The dataset was partitioned, allocating 80% (70 data points) for model development and reserving 20% (18 data points) for model validation. The analysis of modeling outcomes involved three statistical criteria: squared correlation coefficient, mean square error, and root mean square error. The results revealed that the developed model demonstrates a high level of accuracy and minimal error, closely approximating real values. Hence, it can serve as a valuable tool for engineers engaged in the field of rock engineering. In a final step, to assess sensitivity and evaluate the model's output, the @RISK software was employed. The analyses unveiled that among the input parameters within the quarry context, UCS exerts the most substantial influence on the model's output. 
Even slight variations in UCS can lead to significant alterations in MEC within quarry operations.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"17 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141969868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
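For reference, a compact sketch of the Sine Cosine algorithm used in the study is shown below, minimizing a stand-in sphere objective. In the paper the objective would be the prediction error of the MEC model over its parameters; the population size, bounds, and constant a = 2 here are common defaults rather than the authors' settings.

```python
# Sine Cosine Algorithm (SCA) sketch; the sphere objective is illustrative.
import numpy as np

def sca(objective, dim=5, pop=30, iters=200, lb=-10, ub=10, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(objective, 1, X)
    best = X[fit.argmin()].copy()
    for t in range(iters):
        r1 = a - t * a / iters                     # exploration -> exploitation
        r2 = rng.uniform(0, 2 * np.pi, (pop, dim))
        r3 = rng.uniform(0, 2, (pop, dim))
        r4 = rng.random((pop, dim))
        step = np.abs(r3 * best - X)
        X = np.where(r4 < 0.5,
                     X + r1 * np.sin(r2) * step,
                     X + r1 * np.cos(r2) * step)
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(objective, 1, X)
        if fit.min() < objective(best):
            best = X[fit.argmin()].copy()
    return best, objective(best)

print(sca(lambda x: float(np.sum(x ** 2))))        # converges toward the origin
```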
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09854-1
Pankaj Kakati, Bijan Davvaz
In real-life decision-making, expressing uncertainty, imprecision, and hesitancy accurately is essential. Interval-valued spherical fuzzy sets (IVSFS) offer a suitable framework as an extension of interval-valued intuitionistic fuzzy sets, interval-valued picture fuzzy sets, and spherical fuzzy sets, allowing interval-valued membership grades rather than exact values. This enhanced expressiveness enables more effective modeling of real-life decision-making problems once suitable aggregation operators are introduced. In this paper, we propose the interval-valued spherical fuzzy Frank Choquet integral (IVSFFCI) and the interval-valued spherical fuzzy Frank geometric Choquet integral (IVSFFGCI) operators. These operators effectively capture the interaction among criteria in real-life decision-making problems, overcoming the limitations of traditional methods. The IVSFFCI and IVSFFGCI operators utilize Frank's t-norm and t-conorm, providing flexibility and robustness during the aggregation process. By accounting for the interrelation among criteria, they surpass existing operators, making them well suited to real-life decision-making situations. We develop a multicriteria decision-making (MCDM) method using the proposed operators that effectively handles correlated criteria. To demonstrate the efficacy of the proposed method, an illustrative example is presented concerning a financial body's selection of an investment partner from four potential alternatives, based on criteria such as financial strength, mercantile expertise, entrepreneurial competencies, and risk management. The proposed method holds considerable potential across industries, promoting informed, data-driven decision-making.
{"title":"Some interval-valued spherical fuzzy Frank Choquet integral operators in multicriteria decision making","authors":"Pankaj Kakati, Bijan Davvaz","doi":"10.1007/s00500-024-09854-1","DOIUrl":"https://doi.org/10.1007/s00500-024-09854-1","url":null,"abstract":"<p>In real-life decision-making, expressing uncertainty, impreciseness, and hesitancy accurately is essential. Interval-valued spherical fuzzy sets (IVSFS) offer a suitable framework as an extension of interval-valued intuitionistic fuzzy sets, interval-valued picture fuzzy sets, and spherical fuzzy sets, allowing for interval-valued membership grades rather than exact values. This enhanced expressiveness enables more effective modeling of real-life decision-making problems by introducing suitable aggregation operators. In this paper, we propose the interval-valued spherical fuzzy Frank Choquet integral (IVSFFCI) and the interval-valued spherical fuzzy Frank geometric Choquet integral (IVSFFGCI) operators. These operators effectively capture the interaction among the criteria in real-life decision-making problems, overcoming the limitations of traditional methods. The IVSFFCI and IVSFFGCI operators utilize Frank’s <i>t</i>-norm and <i>t</i>-conorm, providing flexibility and robustness during the aggregation process. By considering the interrelation among the criteria, they exceed existing operators, making them the ideal choice for real-life decision-making situations. We develop a multicriteria decision-making (MCDM) method using the proposed operators that effectively deal with correlated criteria in real-life decision-making problems. To demonstrate the efficacy of the proposed method, an illustrative example relating to a financial body’s investment partner selection from four potential alternatives, based on criteria such as financial strength, mercantile expertise, entrepreneurial competencies, and risk management, is presented. The proposed method encapsulates immense potential across industries, promoting informed and data-driven decision-making processes.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"30 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09940-4
Nikolai Krivulin
We consider a discrete best approximation problem formulated in the framework of tropical algebra, which deals with the theory and applications of algebraic systems with idempotent operations. Given a set of samples of the input and output of an unknown function, the problem is to construct a generalized tropical Puiseux polynomial that best approximates the function in the sense of a tropical distance function. Constructing the approximating polynomial involves determining both the unknown coefficient and the unknown exponent of each monomial. To solve the approximation problem, we first reduce it to an equation in an unknown vector of coefficients, given by a matrix whose entries are parameterized by the unknown exponents. We derive a best approximate solution of the equation, which yields both the vector of coefficients and the approximation error, parameterized by the exponents. Optimal values of the exponents are found by minimizing the approximation error, which is transformed into minimizing a function of exponents over all partitions of a finite set. We solve this minimization problem in terms of max-plus algebra (where addition is defined as maximum and multiplication as arithmetic addition) using a computational procedure based on agglomerative clustering. The solution is then extended to the problem of finding optimal exponents in the polynomial in terms of max-algebra (where addition is defined as maximum). The results obtained are applied to develop new solutions for conventional problems of discrete best Chebyshev approximation of real functions by piecewise linear functions and piecewise Puiseux polynomials. We discuss the computational complexity of the proposed solution and estimate upper bounds on the computational time. We demonstrate examples of approximation problems solved in terms of max-plus and max-algebra, with graphical illustrations.
{"title":"On solution of tropical discrete best approximation problems","authors":"Nikolai Krivulin","doi":"10.1007/s00500-024-09940-4","DOIUrl":"https://doi.org/10.1007/s00500-024-09940-4","url":null,"abstract":"<p>We consider a discrete best approximation problem formulated in the framework of tropical algebra, which deals with the theory and applications of algebraic systems with idempotent operations. Given a set of samples of input and output of an unknown function, the problem is to construct a generalized tropical Puiseux polynomial that best approximates the function in the sense of a tropical distance function. The construction of an approximate polynomial involves the evaluation of both unknown coefficient and exponent of each monomial in the polynomial. To solve the approximation problem, we first reduce the problem to an equation in unknown vector of coefficients, which is given by a matrix with entries parameterized by unknown exponents. We derive a best approximate solution of the equation, which yields both vector of coefficients and approximation error parameterized by the exponents. Optimal values of exponents are found by minimization of the approximation error, which is transformed into minimization of a function of exponents over all partitions of a finite set. We solve this minimization problem in terms of max-plus algebra (where addition is defined as maximum and multiplication as arithmetic addition) by using a computational procedure based on the agglomerative clustering technique. This solution is extended to the minimization problem of finding optimal exponents in the polynomial in terms of max-algebra (where addition is defined as maximum). The results obtained are applied to develop new solutions for conventional problems of discrete best Chebyshev approximation of real functions by piecewise linear functions and piecewise Puiseux polynomials. We discuss computational complexity of the proposed solution and estimate upper bounds on the computational time. We demonstrate examples of approximation problems solved in terms of max-plus and max-algebra, and give graphical illustrations.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"68 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09853-2
R. Banupriya, R. Nagarajan, S. Muthubalaji
Unbalanced loads and high neutral currents on low-voltage networks, which frequently use three-phase four-wire systems without enlarged neutral conductors, need to be addressed. To handle such loads and currents, this manuscript proposes a hybrid method for improving low-voltage networks with three-phase four-wire systems. The AOA-RERNN technique integrates the Archimedes Optimization Algorithm (AOA) and a Recalling-Enhanced Recurrent Neural Network (RERNN) to mitigate issues such as neutral voltage offset, harmonics, and neutral-to-ground voltage rise at the point of common coupling (PCC). The strategy optimizes converter parameters with AOA and addresses system imbalances, including mid-high line current, phase disparities, and neutral-line compensation, with RERNN. In addition, control-based compensation reduces the neutral current without requiring large neutral conductors. The proposed model is implemented in MATLAB and achieves an efficiency of 97.54%, whereas existing methods, namely the Artificial Transgender Longicorn Algorithm (ATLA), the combined Adaptive Grasshopper Optimization Algorithm and Artificial Neural Network (AGONN), and Proportional-Integral (PI) control, attain efficiencies of 80.23%, 77.26%, and 82.13%, respectively. The simulation outcomes indicate that the proposed technique outperforms the existing methods. Finally, this study demonstrates the potential of the proposed approach to increase the efficiency and performance of power electronic converters in renewable generation.
{"title":"Power electronic converters in the unbalance compensation method for renewable energy-powered active distribution systems: AOA-RERNN approach","authors":"R. Banupriya, R. Nagarajan, S. Muthubalaji","doi":"10.1007/s00500-024-09853-2","DOIUrl":"https://doi.org/10.1007/s00500-024-09853-2","url":null,"abstract":"<p>Unbalanced loads and high neutral currents on low voltage networks which frequently use three-phase, four wire systems with no larger conductors must need to be addressed. To overcome the loads and currents in low-voltage networks, an hybrid method is proposed in this manuscript for improving the networks of low-voltage using three-phase four-wire systems. The AOA-RERNN technique is the integration of the Archimedean-Optimization-Algorithm (AOA) and Recalling-Enhanced-Recurrent-Neural-Network (RERNN) technique to mitigate the issues, like neutral voltage offset, and harmonics, and neutral-to-ground voltage raise. At the point-of-common-coupling (PCC), the integration of Archimedean-optimization-algorithm and Recalling-enhanced-recurrent-neural-network approach is used to overcome the above mentioned issues. This strategy involves optimizing converter parameters with AOA and addressing system imbalances with RERNN, including mid-high line current, phase disparities, and neutral line compensation. Also, implementing control-based compensation reduces neutral current without requiring large neutral conductors. The proposed model is done in MATLAB. By this, the proposed approach achieves an impressive efficiency of 97.54%. But, the existing methods, like Artificial Transgender Long corn Algorithm (ATLA), Combined Adaptive Grasshopper Optimization Algorithm and Artificial Neural Network (AGONN), And Proportional Integral (PI) attain the efficiency of 80.23%, 77.26%, and 82.13%, respectively. The outcome of the simulation indicates that the proposed technique provides better findings than the present methods. Finally, this study demonstrates the possibility of the proposed approach for increasing the efficiency and the performance of electronic power converters in renewable generation.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"46 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141969125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09873-y
Hossam Magdy Balaha, Eman M. El-Gendy, Mahmoud M. Saafan
Diabetes mellitus is one of the most common diseases, affecting patients of different ages. Diabetes can be controlled if diagnosed as early as possible. One of its serious complications, affecting the retina, is diabetic retinopathy, which can lead to blindness if not diagnosed early. Our purpose is to propose a novel framework, named D_MD_RDF, for early and accurate diagnosis of diabetes and diabetic retinopathy. The framework consists of two phases, one for diabetes mellitus detection (DMD) and the other for diabetic retinopathy detection (DRD). The novelty of the DMD phase lies in two contributions. First, a novel feature selection approach called Advanced Aquila Optimizer Feature Selection (A^2OFS) is introduced to choose the most promising features for diagnosing diabetes; it extracts the required features from laboratory test results while discarding useless ones. Second, a novel classification approach (CA) using five modified machine learning (ML) algorithms is employed, where the modification automatically selects each algorithm's parameters using the Grid Search (GS) algorithm. The novelty of the DRD phase lies in the modification of seven CNNs using the Aquila Optimizer for the classification of diabetic retinopathy. The reported results on the DMD datasets show that AO delivers the best performance metrics in the feature selection process with the help of the modified ML classifiers; the best achieved accuracy is 98.65%, obtained with the GS-ERTC model and max-absolute scaling on the "Early Stage Diabetes Risk Prediction" dataset. Likewise, the results on the DRD datasets show that AOMobileNet is a suitable model for this problem, outperforming the other modified CNN models with an accuracy of 95.80% on the SUSTech-SYSU dataset.
{"title":"$$D_MD_RDF$$ : diabetes mellitus and retinopathy detection framework using artificial intelligence and feature selection","authors":"Hossam Magdy Balaha, Eman M. El-Gendy, Mahmoud M. Saafan","doi":"10.1007/s00500-024-09873-y","DOIUrl":"https://doi.org/10.1007/s00500-024-09873-y","url":null,"abstract":"<p>Diabetes mellitus is one of the most common diseases affecting patients of different ages. Diabetes can be controlled if diagnosed as early as possible. One of the serious complications of diabetes affecting the retina is diabetic retinopathy. If not diagnosed early, it can lead to blindness. Our purpose is to propose a novel framework, named <span>(D_MD_RDF)</span>, for early and accurate diagnosis of diabetes and diabetic retinopathy. The framework consists of two phases, one for diabetes mellitus detection (DMD) and the other for diabetic retinopathy detection (DRD). The novelty of DMD phase is concerned in two contributions. Firstly, a novel feature selection approach called Advanced Aquila Optimizer Feature Selection (<span>(A^2OFS)</span>) is introduced to choose the most promising features for diagnosing diabetes. This approach extracts the required features from the results of laboratory tests while ignoring the useless features. Secondly, a novel classification approach (CA) using five modified machine learning (ML) algorithms is used. This modification of the ML algorithms is proposed to automatically select the parameters of these algorithms using Grid Search (GS) algorithm. The novelty of DRD phase lies in the modification of 7 CNNs using Aquila Optimizer for the classification of diabetic retinopathy. The reported results concerning the DMD datasets shows that AO reports best performance metrics in the feature selection process with the help of modified ML classifiers. The best achieved accuracy is 98.65% with the GS-ERTC model and max-absolute scaling on the “Early Stage Diabetes Risk Prediction Dataset” dataset. Also, from the reported results concerning the DRD datasets, the AOMobileNet is considered a suitable model for this problem as it outperforms the other modified CNN models with accuracy of 95.80% on the “The SUSTech-SYSU dataset” dataset.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"29 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-05 | DOI: 10.1007/s00500-024-09946-y
S. Jain, Dharavath Ramesh, E. Damodar Reddy, Santosha Rathod, Gabrijel Ondrasek
Understanding plant traits is essential for decoding the behavior of various genomes and their reactions to environmental factors, paving the way for efficient and sustainable agricultural practices. Image-based plant phenotyping has become increasingly popular in modern agricultural research, effectively analyzing large-scale plant data. This study introduces a new high-throughput plant phenotyping system designed to examine plant growth patterns using segmentation analysis. The system consists of two main components: (i) a plant detector module that identifies individual plants within a high-throughput imaging setup, utilizing the Tiny-YOLOv4 (You Only Look Once) architecture, and (ii) a segmentation module that accurately outlines the identified plants using the Chan-Vese segmentation algorithm. We tested our approach using top-view RGB tray images of the plant species Arabidopsis thaliana. The plant detector module achieved a localization accuracy of 96.4% and an average Intersection over Union (IoU) of 77.42%. Additionally, the segmentation module demonstrated strong performance, with Dice and Jaccard scores of 0.95 and 0.91, respectively. These results highlight the system's capability to delineate plant boundaries accurately. Our findings affirm the effectiveness of our high-throughput plant phenotyping system and underscore the importance of employing advanced computer vision techniques for precise plant trait analysis. These technological advancements promise to boost agricultural productivity, advance genetic research, and promote environmental sustainability in plant biology and agriculture.
{"title":"A fast high throughput plant phenotyping system using YOLO and Chan-Vese segmentation","authors":"S. Jain, Dharavath Ramesh, E. Damodar Reddy, Santosha Rathod, Gabrijel Ondrasek","doi":"10.1007/s00500-024-09946-y","DOIUrl":"https://doi.org/10.1007/s00500-024-09946-y","url":null,"abstract":"<p>Understanding plant traits is essential for decoding the behavior of various genomes and their reactions to environmental factors, paving the way for efficient and sustainable agricultural practices. Image-based plant phenotyping has become increasingly popular in modern agricultural research, effectively analyzing large-scale plant data. This study introduces a new high-throughput plant phenotyping system designed to examine plant growth patterns using segmentation analysis. This system consists of two main components: (i) A plant detector module that identifies individual plants within a high-throughput imaging setup, utilizing the Tiny-YOLOv4 (You Only Look Once) architecture. (ii) A segmentation module that accurately outlines the identified plants using the Chan-Vese segmentation algorithm. We tested our approach using top-view RGB tray images of the ‘Arabidopsis Thaliana’ plant species. The plant detector module achieved an impressive localization accuracy of 96.4% and an average Intersection over Union (IoU) of 77.42%. Additionally, the segmentation module demonstrated strong performance with dice and Jaccard scores of 0.95 and 0.91, respectively. These results highlight the system’s capability to define plant boundaries accurately. Our findings affirm the effectiveness of our high-throughput plant phenotyping system and underscore the importance of employing advanced computer vision techniques for precise plant trait analysis. These technological advancements promise to boost agricultural productivity, advance genetic research, and promote environmental sustainability in plant biology and agriculture.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"16 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}