Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09875-w
Zengtai Gong, Jing Zhang
The fuzzy concept lattice is one of the effective tools for data mining, and granular reduction is one of its significant research topics. However, little research has been done on granular reduction at different granularities in formal fuzzy contexts (FFCs). Furthermore, the structural complexity of the fuzzy concept lattice limits interest in its study. Therefore, how to simplify the concept lattice structure and how to construct granular reduction methods with granularity have become urgent issues that need to be investigated. To this end, firstly, the concept of an object granule with granularity is defined. Secondly, two reduction algorithms, one based on Boolean reasoning and the other on a graph-theoretic heuristic, are formulated while keeping the structure of this object granule unchanged. Further, to simplify the structure of the fuzzy concept lattice, a partial order relation with parameters is proposed. Finally, the feasibility and effectiveness of our proposed reduction approaches are verified by data experiments.
Title: δ-granular reduction in formal fuzzy contexts: Boolean reasoning, graph representation and their algorithms
This study focuses on unspecified dynamic seru scheduling problems with resource constraints (UDSS-R) in a seru production system (SPS). A mixed integer linear programming model is formulated to minimize the makespan; it is solved sequentially from both the allocation and scheduling perspectives by a strip-packing constructive algorithm (SPCA) with deep reinforcement learning (DRL). The DRL model is trained on generated samples, with the reward values calculated by the SPCA, so that the agent can find better solutions. The output of the DRL is the scheduling order of jobs in serus, while the solution of UDSS-R is produced by the SPCA. Finally, a set of test instances is generated to conduct computational experiments at different instance scales, and the results confirm the effectiveness of the proposed DRL-SPCA in solving UDSS-R, with superior solution quality and efficiency. Across three data scales (10 serus × 100 jobs, 20 serus × 250 jobs, and 30 serus × 400 jobs), compared with GA and SAA, the Avg. RPD of DRL-SPCA decreased by 9.93% and 7.56%, 13.36% and 10.72%, and 9.09% and 7.08%, respectively. In addition, the Avg. CPU time was reduced by 29.53% and 27.93%, 57.48% and 57.04%, and 61.73% and 61.76%, respectively.
Title: A strip-packing constructive algorithm with deep reinforcement learning for dynamic resource-constrained seru scheduling problems
Yiran Xiang, Zhe Zhang, Xue Gong, Xiaoling Song, Yong Yin
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09815-8
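The Avg. RPD (relative percentage deviation) figures above follow a standard scheduling metric; the abstract does not give the exact formula, but RPD is conventionally the percentage gap between a solution's makespan and the best value found, as in this sketch (function names are illustrative):

```python
def rpd(value, best_value):
    """Relative percentage deviation (RPD) of a solution value (e.g. a makespan)
    from the best value found across all compared algorithms."""
    return 100.0 * (value - best_value) / best_value

def avg_rpd(values, best_value):
    """Average RPD over a set of runs or instances."""
    return sum(rpd(v, best_value) for v in values) / len(values)
```

For example, a makespan of 110 against a best-known 100 gives an RPD of 10%.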
This study analyzes the conditional gradient method, also known as the Frank–Wolfe method, for constrained multiobjective optimization problems. We assume that the objectives are continuously differentiable and that the constraint set is convex and compact. We employ an average-type nonmonotone line search, which takes the average of recent objective function values. Asymptotic convergence properties are established without convexity assumptions on the objective functions. We prove that every limit point of the sequence of iterates obtained by the proposed method is a Pareto critical point. An iteration-complexity bound is provided regardless of the convexity assumption on the objective functions. The effectiveness of the suggested approach is demonstrated by applying it to several benchmark test problems. In addition, the efficiency of the proposed algorithm in generating approximations of the entire Pareto front is compared to the existing Hager–Zhang conjugate gradient method, the steepest descent method, the monotone conditional gradient method, and a nonmonotone conditional gradient method. For the empirical comparison, we utilize two commonly used performance metrics: the inverted generational distance and the hypervolume indicator.
Title: A nonmonotone conditional gradient method for multiobjective optimization problems
Ashutosh Upadhayay, Debdas Ghosh, Jauny, Jen-Chih Yao, Xiaopeng Zhao
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09806-9
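As a rough single-objective sketch (the paper's method is multiobjective, and the function names and test problem below are illustrative assumptions, not the authors' code), a Frank–Wolfe iteration with an average-type nonmonotone Armijo line search accepts a step against the mean of the last few objective values rather than the current one:

```python
import numpy as np

def fw_nonmonotone(f, grad, lmo, x0, max_iter=300, delta=1e-4, memory=5):
    """Frank-Wolfe with an average-type nonmonotone line search (scalar sketch)."""
    x = np.asarray(x0, float)
    hist = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        s = lmo(g)                      # linear minimisation oracle over the feasible set
        d = s - x
        if -g @ d < 1e-10:              # Frank-Wolfe gap: stationarity measure
            break
        ref = np.mean(hist[-memory:])   # nonmonotone reference: average of recent values
        gamma = 1.0
        while f(x + gamma * d) > ref + delta * gamma * (g @ d) and gamma > 1e-12:
            gamma *= 0.5                # backtrack until sufficient decrease vs. the average
        x = x + gamma * d
        hist.append(f(x))
    return x

# Illustrative problem: minimise ||x - c||^2 over the box [0, 1]^2 (optimum at c).
c = np.array([0.3, 0.7])
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2.0 * (x - c)
lmo = lambda g: (g < 0).astype(float)   # vertex of [0, 1]^n minimising g . s
x_star = fw_nonmonotone(f, grad, lmo, np.zeros(2))
```

Because acceptance is tested against the recent average, occasional increases in the objective are tolerated, which is the point of the nonmonotone scheme.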
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09901-x
Saurabh Agarwal, K. V. Arya, Yogesh Kumar Meena
Chest X-ray imaging is a critical diagnostic tool for identifying pulmonary diseases. However, manual interpretation of these images is time-consuming and error-prone. Automated systems utilizing convolutional neural networks (CNNs) have shown promise in improving the accuracy and efficiency of chest X-ray image classification. While previous work has mainly focused on using feature maps from the final convolution layer, there is a need to explore the benefits of leveraging additional layers for improved disease classification. Extracting robust features from limited medical image datasets remains a critical challenge. In this paper, we propose a novel deep learning-based multilayer multimodal fusion model that emphasizes extracting features from different layers and fusing them. Our disease detection model considers the discriminatory information captured by each layer. Furthermore, we propose the fusion of different-sized feature maps (FDSFM) module to effectively merge feature maps from diverse layers. The proposed model achieves significantly higher accuracies of 97.21% and 99.60% for three-class and two-class classification, respectively. The proposed multilayer multimodal fusion model, along with the FDSFM module, holds promise for accurate disease classification and can also be extended to other disease classifications in chest X-ray images.
Title: MultiFusionNet: multilayer multimodal fusion of deep neural networks for chest X-ray image classification
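The FDSFM module's internals are not specified in this abstract; a generic way to merge feature maps of different spatial sizes is to resample each map to a common size and concatenate along channels. A minimal numpy sketch (all names illustrative, nearest-neighbour upsampling with integer factors only):

```python
import numpy as np

def upsample_nn(fmap, target_hw):
    """Nearest-neighbour upsampling of a (C, H, W) feature map by integer factors."""
    c, h, w = fmap.shape
    th, tw = target_hw
    assert th % h == 0 and tw % w == 0, "integer scale factors only in this sketch"
    return fmap.repeat(th // h, axis=1).repeat(tw // w, axis=2)

def fuse_feature_maps(fmaps):
    """Resample every map to the largest spatial size, then concatenate channels."""
    th = max(f.shape[1] for f in fmaps)
    tw = max(f.shape[2] for f in fmaps)
    return np.concatenate([upsample_nn(f, (th, tw)) for f in fmaps], axis=0)
```

Fusing an (8, 4, 4) map with a (16, 8, 8) map this way yields a (24, 8, 8) tensor that a classifier head can consume.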
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09880-z
Ahsan Mahboob, M. Al-Tahan, Ghulam Muhiuddin
In this article, the concept of fuzzy (m, n)-quasi-ideals in ordered semigroups is developed and discussed in various ways. In addition, we present the concepts of fuzzy (m, 0)-ideals and fuzzy (0, n)-ideals in ordered semigroups and investigate some of their associated properties. Furthermore, the (m, n)-regular ordered semigroups are studied in terms of fuzzy (m, n)-quasi-ideals, fuzzy (m, 0)-ideals, and fuzzy (0, n)-ideals.
Title: Characterizations of ordered semigroups in terms of fuzzy (m, n)-substructures
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09896-5
Vinícius Veloso de Melo, Alexandre Moreira Nascimento, Giovanni Iacca
Several constrained optimization problems have been adequately solved over the years thanks to advances in the area of metaheuristics. Nevertheless, the question as to which search logic performs better on constrained optimization often arises. In this paper, we present Dual Search Optimization (DSO), a co-evolutionary algorithm that includes an adaptive penalty function to handle constrained problems. Compared to other self-adaptive metaheuristics, one of the main advantages of DSO is that it can automatically construct its own perturbation logics, i.e., the ways solutions are modified to create new ones during the optimization process. This is accomplished by co-evolving the solutions (encoded as vectors of integer/real values) and the perturbation strategies (encoded as Genetic Programming trees), in order to adapt the search to the problem. In addition, the adaptive penalty function allows the algorithm to handle constraints very effectively, with only a minor additional algorithmic overhead. We compare DSO with several state-of-the-art algorithms on two sets of problems, namely: (1) seven well-known constrained engineering design problems and (2) the CEC 2017 benchmark for constrained optimization. Our results show that DSO can achieve state-of-the-art performance, being capable of automatically adjusting its behavior to the problem at hand.
Title: A co-evolutionary algorithm with adaptive penalty function for constrained optimization
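The abstract does not give DSO's penalty formula. As an assumption-laden sketch of the general idea, a common adaptive scheme scales the penalty coefficient with the population's feasibility ratio, punishing constraint violation more harshly when few individuals are feasible:

```python
import numpy as np

def adaptive_penalty_fitness(objs, violations, base=1.0):
    """Penalised fitness (to minimise): the penalty coefficient grows as the
    fraction of feasible individuals in the population shrinks.

    objs       -- objective values of the population
    violations -- total constraint violation per individual (0 means feasible)
    """
    objs = np.asarray(objs, float)
    viol = np.asarray(violations, float)
    feas_ratio = np.mean(viol == 0)        # fraction of feasible individuals
    coeff = base * (2.0 - feas_ratio)      # in [base, 2*base]: harsher when mostly infeasible
    return objs + coeff * viol
```

With half the population feasible, an individual with objective 2.0 and violation 3.0 receives a penalised fitness of 2.0 + 1.5 x 3.0 = 6.5.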
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09937-z
Dongxin Li, Jiayue Xin
Under the wave of the digital era, the retail industry is facing unprecedentedly fierce competition and a rapidly changing market environment. In this context, developing smart and efficient pricing strategies has become a top priority for the industry. Faced with this challenge, traditional pricing methods are inadequate due to their slow response, insufficient adaptability to instant market changes, and over-reliance on historical data and human experience. In response to this urgent need, this study aims to design an intelligent pricing model rooted in deep learning to enhance the vitality and competitiveness of the retail industry. The solution adopted in this article combines a Temporal Fusion Transformer (TFT), an Ensemble of Simplified RNNs (ES-RNN), and dynamic attention mechanisms, aiming to accurately capture and analyze complex time series data through these advanced technologies. The TFT processes multivariate and multi-level data, the ES-RNN integrates multiple simplified recurrent neural networks to enhance predictive power, and the dynamic attention mechanism allows the model to dynamically weight the importance of different points in the time series, thereby improving the effectiveness of feature extraction. Experimental results on four different data sets show that our models perform excellently, with product-sales prediction accuracy far exceeding that of traditional models. In addition, with its ability to dynamically adjust pricing, the model demonstrates excellent stability and adaptability amid market fluctuations. This research not only promotes the intelligent transformation of retail pricing strategies, but also provides a more strategic tool for enterprises to compete for market share.
Title: Deep learning-driven intelligent pricing model in retail: from sales forecasting to dynamic price optimization
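The "dynamic attention mechanism" is described only at a high level here. Generically, attention over a time series scores each step against a context vector, normalizes the scores with a softmax, and forms a weighted summary; a minimal numpy sketch (names illustrative, not the paper's architecture):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(hidden, query):
    """Weight each time step by its relevance to a query vector, then pool.

    hidden -- (T, d) per-step representations
    query  -- (d,) context vector
    """
    scores = hidden @ query       # (T,) relevance score per time step
    weights = softmax(scores)     # weights sum to 1 across time steps
    return weights @ hidden       # (d,) weighted summary of the series
```

Steps whose representation aligns with the query dominate the pooled summary, which is how attention "dynamically weights the importance of different points in the time series."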
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09800-1
Bo Lei, Luhang He, Zhen Yang
Renyi entropy-based thresholding is a popular image segmentation method. In this work, to improve the performance of the Renyi entropy thresholding method, an efficient adaptive multilevel Renyi entropy thresholding method based on the energy curve with dynamic programming (DP + ARET) is presented. First, the histogram is replaced by the energy curve in the Renyi entropy thresholding to take advantage of the spatial context information of pixels. Second, an adaptive entropy index selection strategy is proposed based on the image histogram. Finally, to decrease the computational complexity of multilevel Renyi entropy thresholding, an efficient solution is calculated by the dynamic programming technique. The proposed DP + ARET method obtains the globally optimal thresholds with time complexity linear in the number of thresholds. Comparative experiments between the proposed method and the histogram-based method verify the effectiveness of the energy curve. The segmentation results on COVID-19 Computed Tomography (CT) images with the same objective function by the proposed DP + ARET and swarm intelligence optimization methods testify that DP + ARET can quickly obtain the globally optimal thresholds. Finally, the performance of the DP + ARET method is compared with several image segmentation methods quantitatively and qualitatively; the average segmentation accuracy (SA) improves by 7% over the comparative methods. The proposed DP + ARET method can be used to quickly segment images without any other prior knowledge.
Title: An efficient adaptive multilevel Renyi entropy thresholding method based on the energy curve with dynamic programming
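The paper's exact DP formulation is not reproduced in this abstract, but the core idea, dynamic programming over contiguous grey-level classes to maximize total Renyi entropy, can be sketched as follows (the energy curve would simply replace the histogram; all names are illustrative, and the class-entropy table is built naively rather than with cumulative sums):

```python
import numpy as np

def renyi_class_entropy(p, alpha=2.0):
    """Renyi entropy of one grey-level class, given its probability slice p."""
    P = p.sum()
    if P <= 0:
        return -np.inf                 # empty class: never part of an optimal split
    return np.log(((p / P) ** alpha).sum()) / (1.0 - alpha)

def multilevel_renyi_dp(hist, k, alpha=2.0):
    """Globally optimal k thresholds maximising total Renyi entropy, via DP.

    hist -- histogram (or energy curve) over L grey levels
    k    -- number of thresholds (giving k + 1 classes)
    The DP itself runs in O(k * L^2): linear in the number of thresholds k.
    """
    p = np.asarray(hist, float)
    p = p / p.sum()
    L = len(p)
    # ent[i][j]: entropy of the class covering grey levels i .. j-1
    ent = [[renyi_class_entropy(p[i:j], alpha) for j in range(L + 1)] for i in range(L)]
    # best[c][j]: max total entropy splitting levels 0 .. j-1 into c classes
    best = [[-np.inf] * (L + 1) for _ in range(k + 2)]
    argt = [[0] * (L + 1) for _ in range(k + 2)]
    for j in range(1, L + 1):
        best[1][j] = ent[0][j]
    for c in range(2, k + 2):
        for j in range(c, L + 1):
            for t in range(c - 1, j):
                v = best[c - 1][t] + ent[t][j]
                if v > best[c][j]:
                    best[c][j] = v
                    argt[c][j] = t
    # backtrack the optimal threshold positions
    ts, j = [], L
    for c in range(k + 1, 1, -1):
        j = argt[c][j]
        ts.append(j)
    return sorted(ts)
```

On a bimodal histogram such as [5, 5, 0, 0, 5, 5], a single optimal threshold falls in the empty valley between the two modes.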
Pub Date: 2024-07-26 DOI: 10.1007/s00500-024-09855-0
F. S. Mousavinejad, M. Fatehi Nia
The saccade is one of the eye movements that motivated the creation of the saccadic model. This work is grounded in the basic principles of the saccadic system, namely burst neurons and a resettable integrator model. Considering the possibility of strengthening the saccadic model on the basis of this fundamental model, we introduce a replacement function for use in the burster equation that preserves the form of the 'on' response and also accounts for the 'off' response. The new model is a two-dimensional map containing slow and fast variables with a new burster function, which resolves the lack of differentiability of the original function at the equilibrium point. By applying time-series approaches and phase portraits, the mechanisms underlying the generation of spikes and spike bursts in the behavior of the new model are revealed. The other main focus of the present research is to determine the geometry of the slow manifold for the newly developed system. Specifically, we examine the dynamics around an equilibrium point and the geometry of a slow manifold by using Fenichel's theorem. In addition, we use center manifold theory to describe some dynamical characteristics of the center manifold that the slow manifold matches. Finally, this study investigates the effects of geometric singular perturbations on this fast-slow burster equation, revealing dynamical behaviors such as uniform asymptotic stability and local attractivity.
Title: Slow manifold analysis of modified burst model in the saccadic system
In recent years, methods that use graph neural networks (GNNs) to learn users’ social influence have been widely applied to social recommendation and have proven effective, but several important challenges remain unaddressed: (i) Most work fails to consider user interests (historical user-item interactions) when first building user-user social relationships, which can make it difficult to capture accurate user embeddings and thus prevents the model from fully exploring users’ social influence; (ii) Most current methods neither build an item’s social neighbors (items sharing the same item-user interactions) nor aggregate information from the perspective of those neighbors, so the item may lose many details when expressing the user’s interest factors. To address these challenges, we propose Exploring Implicit Influence for Social Recommendation Based on GNN (EIIGNN). First, we construct the initial user embedding from user-item interaction information and use the implicit modeling module in user modeling to explore the implicit influence of interest factors on users. In addition, EIIGNN models the social graph structure of items (an item-item graph) so that items can aggregate information from the perspective of their social neighbors, which helps the model learn a more accurate representation of each item. Finally, extensive experimental results on two real-world datasets clearly demonstrate the effectiveness of EIIGNN.
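The abstract's item-side idea, building an item-item "social" graph from shared user interactions and letting each item aggregate its neighbors' embeddings, can be sketched as follows. The graph construction rule and the simple mean aggregator here are illustrative assumptions, not the actual EIIGNN architecture.

```python
import numpy as np

def item_item_graph(interactions, n_items):
    # interactions: list of (user, item) pairs. Here two items are treated
    # as social neighbors when at least one user interacted with both.
    by_user = {}
    for u, i in interactions:
        by_user.setdefault(u, set()).add(i)
    adj = [set() for _ in range(n_items)]
    for items in by_user.values():
        for i in items:
            adj[i] |= items - {i}
    return adj

def aggregate(embeddings, adj):
    # One round of mean aggregation over the item-item graph: each item's
    # new embedding mixes its own vector with the mean of its neighbors'.
    out = embeddings.copy()
    for i, nbrs in enumerate(adj):
        if nbrs:
            out[i] = 0.5 * embeddings[i] + 0.5 * embeddings[list(nbrs)].mean(axis=0)
    return out
```

In a real GNN recommender the fixed 0.5/0.5 mixing would be replaced by learned weights (or attention) and the aggregation stacked over several layers; this sketch only shows why aggregating from an item's co-interacted neighbors injects shared-interest information into the item representation.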
{"title":"Exploring implicit influence for social recommendation based on GNN","authors":"Zhewei Liu, Peilin Yang, Qingbo Hao, Wenguang Zheng, Yingyuan Xiao","doi":"10.1007/s00500-024-09898-3","DOIUrl":"https://doi.org/10.1007/s00500-024-09898-3","url":null,"abstract":"<p>In recent years, the method of using graph neural networks (GNN) to learn users’ social influence has been widely applied to social recommendation and has shown effectiveness, but several important challenges have not been well addressed: (i) Most work fails to consider the user interests (historical user-item interactions) when first building user-user social relationships, which can make it difficult to capture accurate user embedding and thus prevent the model from better exploring the users’ social influence; (ii) Most of the current methods do not build social neighbors (with the same item-user interaction) that belong to the item and do not aggregate information from the perspective of social neighbors, which makes it possible for the item to lose a lot of details when expressing the user’s interest factors. Therefore, to address the above challenges, we propose Exploring Implicit Influence for Social Recommendation Based on GNN (EIIGNN). First, we construct the initial user embedding with user-item interaction information and use the implicit modeling module in user modeling to explore the implicit influence of interest factors on users. In addition, EIIGNN models the social graph structure of item (an item-item graph) so that item can aggregate information from the perspective of their social neighbors, which helps the model learn a more accurate representation of the item. 
Finally, extensive experimental results on two real-world datasets clearly demonstrate the effectiveness of EIIGNN.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"10 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}