FNContra: Frequency-domain Negative Sample Mining in Contrastive Learning for limited-data image generation
Pub Date: 2024-11-08 | DOI: 10.1016/j.eswa.2024.125676
Qiuxia Yang , Zhengpeng Zhao , Yuanyuan Pu , Shuyu Pan , Jinjing Gu , Dan Xu
Substantial training data is necessary to train an effective generative adversarial network (GAN); without it, the discriminator easily overfits, yielding sub-optimal models. To address this problem, this work proposes Frequency-domain Negative Sample Mining in Contrastive learning (FNContra) to improve data efficiency, requiring the discriminator to differentiate the relationships between negative samples and real images. Concretely, this work first constructs multi-level negative samples in the frequency domain and then proposes Discriminated Wavelet-instance Contrastive Learning (DWCL) and Generated Wavelet-prototype Contrastive Learning (GWCL). The former helps the discriminator learn fine-grained texture features, and the latter pushes the generated feature distribution closer to the real one. Considering the learning difficulty of the multi-level negative samples, this work proposes a dynamic weight driven by self-information, which ensures that the resultant force exerted by the multi-level negative samples remains positive during training. Finally, this work performs experiments on eleven datasets spanning different domains and resolutions. The quantitative and qualitative results demonstrate the superiority and effectiveness of FNContra trained on limited data and indicate that FNContra can synthesize high-quality images. Notably, FNContra achieves the best FID scores on 10 out of 11 datasets, with improvements of 17.90 and 29.24 on Moongate and Shells, respectively, compared to the baseline. The code can be found at https://github.com/YQX1996/FNContra.
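The core mechanism can be illustrated with a small Python sketch. This is a hedged toy illustration, not the authors' implementation: it builds frequency-domain negatives by splicing a generated image's high-frequency wavelet subbands onto a real image's low-frequency content (via PyWavelets), then scores them with a generic InfoNCE-style contrastive loss; `wavelet_negatives` and `info_nce` are hypothetical names.

```python
# Toy sketch of frequency-domain negative mining (not the paper's code).
# Assumes NumPy and PyWavelets; function names are hypothetical.
import numpy as np
import pywt

def wavelet_negatives(real, fake, wavelet="haar"):
    """Splice generated high-frequency subbands onto real low-frequency
    content, producing one negative sample per subband combination."""
    ca_r, (ch_r, cv_r, cd_r) = pywt.dwt2(real, wavelet)
    ca_f, (ch_f, cv_f, cd_f) = pywt.dwt2(fake, wavelet)
    swaps = [
        (ch_f, cv_r, cd_r),   # fake horizontal detail only
        (ch_r, cv_f, cd_r),   # fake vertical detail only
        (ch_r, cv_r, cd_f),   # fake diagonal detail only
        (ch_f, cv_f, cd_f),   # all high-frequency subbands fake
    ]
    return [pywt.idwt2((ca_r, s), wavelet) for s in swaps]

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE loss over flattened images (cosine similarity)."""
    def sim(a, b):
        a, b = a.ravel(), b.ravel()
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0] + 1e-12)              # positive should win

rng = np.random.default_rng(0)
real, fake = rng.standard_normal((2, 64, 64))
print(info_nce(real, real, wavelet_negatives(real, fake)))
```

In the paper's setting, similarity would be computed on discriminator features rather than raw pixels, and the per-level losses would be combined using the self-information-driven dynamic weights.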
{"title":"FNContra: Frequency-domain Negative Sample Mining in Contrastive Learning for limited-data image generation","authors":"Qiuxia Yang , Zhengpeng Zhao , Yuanyuan Pu , Shuyu Pan , Jinjing Gu , Dan Xu","doi":"10.1016/j.eswa.2024.125676","DOIUrl":"10.1016/j.eswa.2024.125676","url":null,"abstract":"<div><div>Substantial training data is necessary to train an effective generative adversarial network(GANs), without which the discriminator is easily overfitting, causing the sub-optimal models. To solve these problems, this work explores the Frequency-domain Negative Sample Mining in Contrastive learning (FNContra) to improve data efficiency, which requires the discriminator to differentiate the definite relationships between the negative samples and real images. Concretely, this work first constructs multiple-level negative samples in the frequency domain and then proposes Discriminated Wavelet-instance Contrastive Learning (DWCL) and Generated Wavelet-prototype Contrastive Learning (GWCL). The former helps the discriminator learn the fine-grained texture features, and the latter impels the generated feature distribution to be close to real. Considering the learning difficulty of multi-level negative samples, this work proposes a dynamic weight driven by self-information, which ensures the resultant force is positive from the multi-level negative samples during the training. Finally, this work performs experiments on eleven datasets with different domains and resolutions. The quantitative and qualitative results demonstrate the superiority and effectiveness of the FNContra trained on limited data, and it indicates that FNContra can synthesize high-quality images. Notably, FNContra achieves the best FID scores on 10 out of 11 datasets, with improvements of 17.90 and 29.24 on Moongate and Shells, respectively, compared to the baseline. The code can be found at <span><span>https://github.com/YQX1996/FNContra</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"263 ","pages":"Article 125676"},"PeriodicalIF":7.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrated fuzzy decision-making methodology with intuitionistic fuzzy numbers: An application for disaster preparedness in clinical laboratories
Pub Date: 2024-11-08 | DOI: 10.1016/j.eswa.2024.125712
Miguel Ortiz-Barrios , Natalia Jaramillo-Rueda , Andrea Espeleta-Aris , Berk Kucukaltan , Llanos Cuenca
Society is on constant alert due to the increasing frequency and severity of Seasonal Respiratory Diseases (SRDs), which pose significant challenges from both a humanitarian and a public health perspective. The recent COVID-19 pandemic has tested the capacity of clinical laboratories to address seasonal infections, epidemic outbreaks, and critical emergencies. This scenario has led to operational burdens, primarily from resource limitations, a lack of proactive planning, and low adaptation to unforeseen circumstances. Coupling different data-driven approaches that consider multi-criteria weighting, interdependence assessment, and outranking is critical for devising effective interventions that upgrade the operability of clinical labs during SRDs. Nonetheless, an in-depth literature review revealed no studies that use such hybridized approaches to address this problem. Consequently, this article proposes an innovative hybrid Multicriteria Decision-Making (MCDM) methodology that integrates the Intuitionistic Fuzzy Analytic Hierarchy Process (IF-AHP), the Intuitionistic Fuzzy Decision Making Trial and Evaluation Laboratory (IF-DEMATEL), and the Combined Compromise Solution (CoCoSo) to assess the disaster preparedness of clinical laboratories during SRDs. Initially, we applied IF-AHP to assign relative weights to criteria and sub-criteria, accounting for the inherent hesitation and uncertainty in decision-making. Subsequently, IF-DEMATEL was used to analyze the interrelationships between criteria, providing insights into the interrelations among clinical lab disaster management drivers. Finally, the CoCoSo method was applied to estimate each lab’s Preparedness Index (PI) and detect response gaps in coping with SRDs. The methodology was validated across nine clinical laboratories in Colombia during the most recent respiratory pandemic. This study supports healthcare-sector authorities by identifying the key criteria and sub-criteria affecting the response of clinical labs, eliciting the main response drivers of clinical labs facing SRDs, and calculating a multidimensional indicator representing the labs’ preparedness. This work also enriches the literature by applying the IF-AHP, IF-DEMATEL, and CoCoSo approach to a challenging case study requiring a multi-method data-driven application. Furthermore, it suggests future directions for improving the proposed framework in other related contexts.
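To make the final ranking step concrete, here is a plain-valued CoCoSo computation in Python; the intuitionistic-fuzzy front end (IF-AHP weights, IF-DEMATEL interrelations) is out of scope, and the decision matrix, weights, and criterion directions below are invented for illustration.

```python
# Plain-valued CoCoSo ranking sketch (illustrative inputs; the paper feeds
# CoCoSo with weights derived from IF-AHP and IF-DEMATEL).
import numpy as np

def cocoso(X, w, benefit, lam=0.5):
    """X: alternatives x criteria matrix; w: criterion weights summing to 1;
    benefit: boolean mask, True where higher values are better."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    R = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
    S = (R * w).sum(axis=1)                  # weighted sum measure
    P = (R ** w).sum(axis=1)                 # weighted power measure
    ka = (P + S) / (P + S).sum()             # three appraisal scores
    kb = S / S.min() + P / P.min()
    kc = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())
    return (ka * kb * kc) ** (1 / 3) + (ka + kb + kc) / 3

X = np.array([[0.7, 120.0, 3.0],             # hypothetical lab scores
              [0.9,  90.0, 4.0],
              [0.6, 150.0, 5.0]])
w = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, True])      # criterion 2 is a cost (time)
print(cocoso(X, w, benefit))                 # larger value = better prepared
```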
{"title":"Integrated fuzzy decision-making methodology with intuitionistic fuzzy numbers: An application for disaster preparedness in clinical laboratories","authors":"Miguel Ortiz-Barrios , Natalia Jaramillo-Rueda , Andrea Espeleta-Aris , Berk Kucukaltan , Llanos Cuenca","doi":"10.1016/j.eswa.2024.125712","DOIUrl":"10.1016/j.eswa.2024.125712","url":null,"abstract":"<div><div>Society is on constant alert due to the increasing frequency and severity of Seasonal Respiratory Diseases (SRDs), posing significant challenges from both a humanitarian and public health perspective. The recent COVID-19 pandemic has tested the capacity of clinical laboratories to address seasonal infections, epidemic outbreaks, and critical emergencies. This scenario has led to operational burdens, primarily from resource limitations, a lack of proactive planning, and the low adaptation to unforeseen circumstances. Coupling different data-driven approaches considering multi-criteria weighting, interdependence assessment, and outranking are critical for devising effective interventions upgrading the operability of clinical labs during SRDs. Nonetheless, a deep literature review revealed there are no studies using these hybridized approaches when addressing this problem. Consequently, this article proposes the application of an innovative hybrid Multicriteria Decision-Making (MCDM) methodology that integrates the Intuitionistic Fuzzy Analytic Hierarchy Process (IF-AHP), Intuitionistic Fuzzy Decision Making Trial and Evaluation Laboratory (IF-DEMATEL), and Combined Compromise Solution (CoCoSo) to assess the disaster preparedness of clinical laboratories during SRDs. Initially, we applied IF-AHP to assign the relative weights to criteria and sub-criteria, considering the inherent hesitation and uncertainty in decision-making. Subsequently, IF-DEMATEL was utilized to analyze the interrelationships between criteria, providing insights into the interrelations among clinical lab disaster management drivers. Finally, the CoCoSo method was applied to estimate each lab’s Preparedness Index (PI) and detect response gaps when coping with SRDs. The suggested methodology was validated across nine clinical laboratories in Colombia during the most recent respiratory pandemic. This study contributes to the healthcare sector authorities by identifying key criteria and sub-criteria affecting the response of clinical labs, the elicitation of main response drivers in clinical labs when facing SRDs, and the calculation of a multidimensional indicator representing the preparedness of the labs. This work also enriches the literature by applying the IF-AHP, IF-DEMATEL, and CoCoSo approach to a challenging case study requiring a multi-method data-driven application. Furthermore, it suggests future directions to improve the proposed framework in other related contexts.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"263 ","pages":"Article 125712"},"PeriodicalIF":7.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Circulise, a model-driven framework to build and align socio-technical systems for the twin transition: Fanyatu’s case of sustainability in reforestation
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125664
Yves Wautelet , Xavier Rouget
Building circular economic systems is crucial to addressing ecological challenges like climate change. The twin transition suggests that, to maximize the impact of sustainable solutions, humans and (disruptive) technologies need to be effectively integrated. Methods to conceptually build such (eco)systems that integrate humans and technologies, and to assess their ecological impact before implementation, are lacking. This paper addresses this gap by proposing the Circulise framework, a model-driven method designed to build circular systems and evaluate their environmental performance. The approach promotes design thinking to create socio-technical ecosystems that can be evaluated in light of their alignment with circular-economy and/or sustainability principles and used to generate operational software behavior. The Circulise framework was developed following the methodological guidance of design science research. It is applied in this paper to the case of Fanyatu, a non-profit organization focused on reforestation in the Congo Basin, showing its ability to create a circular ecosystem that not only supports the creation of regenerative CO₂-absorbing forests but also empowers and improves the quality of life of the local communities involved in planting the trees. In Fanyatu’s case, Circulise’s strategic planning and technology integration lead to virtuous cycles, enabling a snowball effect in forest creation and the promotion of sustainable projects. The framework’s scalability and versatility allow it to be applied across various contexts, enabling the creation of customized circular ecosystems for sustainability tailored to specific human and technological needs.
{"title":"Circulise, a model-driven framework to build and align socio-technical systems for the twin transition: Fanyatu’s case of sustainability in reforestation","authors":"Yves Wautelet , Xavier Rouget","doi":"10.1016/j.eswa.2024.125664","DOIUrl":"10.1016/j.eswa.2024.125664","url":null,"abstract":"<div><div>Building circular economic systems is crucial to address ecological challenges like climate change. The twin transition suggests that, to maximize the impact of sustainable solutions, humans and (disruptive) technologies need to be effectively integrated. Methods to conceptually build such (eco)systems integrating these and assess their ecological impact before implementation are lacking. This paper addresses this gap by proposing the Circulise framework, a model-driven method designed to build circular systems and evaluate their environmental performance. The approach promotes design-thinking to create socio-technical ecosystems that can be evaluated at the light of their alignment with circular economy and/or sustainability principles and be used to generate operational software behavior. The Circulise framework was developed following the methodological guidance of design science research. It is applied in this paper to the case of Fanyatu, a non-profit organization focused on reforestation in the Congo Basin, showing its ability to create a circular ecosystem not only supporting the creation of regenerative CO<span><math><msub><mrow></mrow><mrow><mn>2</mn></mrow></msub></math></span>-absorbing forests but also empowering and improving the quality of life of the local communities involved in the planting of trees. In Fanyatu’s case, Circulise’s strategic planning and technology integration lead to virtuous cycles, enabling a snowball effect in forest creation and the promotion of sustainable projects. The framework’s scalability and versatility allow it to be applied across various contexts, enabling the creation of customized circular ecosystems for sustainability tailored to specific human and technological needs.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"262 ","pages":"Article 125664"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-attribute group decision-making method using single-valued neutrosophic credibility numbers with fairly variable extended power average operators and GRA-MARCOS
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125703
Pingqing Liu , Junxin Shen , Peng Zhang , Baoquan Ning
Data trading platform (DTP) selection is a classic multi-attribute group decision-making (MAGDM) problem. As an extension of intuitionistic fuzzy sets (IFSs), single-valued neutrosophic credibility numbers (SvNCNs) can express both fuzzy evaluation information and the credibility level of that information, offering better expressiveness for describing fuzzy decision-making information. However, existing studies on aggregation operators and decision-making methods in the SvNCN environment are inadequate. Therefore, this paper proposes a MAGDM technique based on the fairly weighted variable extended power average (SvNCNFWVEPA) operators of SvNCNs and the grey relational analysis (GRA)-Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) method. The main contributions are as follows: (1) we propose fairly operation rules for aggregating SvNCNs in an unbiased manner; (2) addressing the lack of research on SvNCN measures, we introduce preference distance and entropy measures for SvNCNs and use the entropy measure to compute the objective weights of the attributes; (3) to aggregate SvNCN information effectively, inspired by the variable power geometric (VPG) operators and the extended power average (EPA) operators, we propose the variable extended power average (VEPA) operator for handling extreme values scientifically, extending it to SvNCNs as the SvNCN fairly variable extended power average (SvNCNFVEPA) operators and their extended form; (4) we introduce the GRA method to calculate the degree of utility of alternatives relative to the ideal and anti-ideal alternatives, forming the GRA-MARCOS method, which reflects both indicator differences and the similarity of alternatives, rendering the evaluation results more scientific and objective; (5) to illustrate the application of the method to the MAGDM problem, we apply it to a DTP selection example. Parameter sensitivity analysis and comparative analysis against existing methods demonstrate that the proposed method is more scientific and flexible.
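Contribution (3) builds on the power-average family, whose behavior is easy to see on plain scalars: each value's weight grows with the support it receives from the other values, so outliers are down-weighted. The sketch below is Yager's classical scalar power average, a hedged stand-in; the paper's operators extend this idea to SvNCNs with fairly operation rules and variable/extended weighting.

```python
# Yager-style power average on scalars in [0, 1] (illustrative only).
import numpy as np

def power_average(a):
    """PA(a) = sum((1 + T_i) * a_i) / sum(1 + T_i), where
    T_i = sum_{j != i} Supp(a_i, a_j) and Supp(a_i, a_j) = 1 - |a_i - a_j|."""
    a = np.asarray(a, dtype=float)
    supp = 1.0 - np.abs(a[:, None] - a[None, :])   # pairwise support matrix
    np.fill_diagonal(supp, 0.0)                    # no self-support
    T = supp.sum(axis=1)
    return ((1 + T) * a).sum() / (1 + T).sum()

vals = [0.62, 0.60, 0.65, 0.10]      # three consistent experts, one outlier
print(power_average(vals))           # pulled less by the outlier ...
print(np.mean(vals))                 # ... than the plain arithmetic mean
```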
{"title":"Multi-attribute group decision-making method using single-valued neutrosophic credibility numbers with fairly variable extended power average operators and GRA-MARCOS","authors":"Pingqing Liu , Junxin Shen , Peng Zhang , Baoquan Ning","doi":"10.1016/j.eswa.2024.125703","DOIUrl":"10.1016/j.eswa.2024.125703","url":null,"abstract":"<div><div>Data trading platform (DTP) selection is a classic multi-attribute group decision-making (MAGDM) problem. As an extension of intuitionistic fuzzy sets (IFSs), single-valued neutrosophic credibility numbers (SvNCNs) can express both fuzzy evaluation information and the credibility level of the information, offering better expressiveness in describing fuzzy decision-making information. However, existing studies on aggregation operators and decision-making methods in the SvNCN environment are inadequate. Therefore, this paper proposes a MAGDM technique based on the fairly weighted variable extended power average (SvNCNFWVEPA) operators of SvNCNs and grey relational analysis (GRA)-Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) method. The main contributions are as follows: (1) we propose fairly operation rules for aggregating SvNCNs in an unbiased manner; (2) addressing the lack of SvNCN measurement research, we introduce preference distance and entropy measures for SvNCNs, then utilize the entropy measure to compute the objective weights of the attributes; (3) to effectively aggregate SvNCN information, inspired by the variable power geometric (VPG) operators and the extended power average (EPA) operators, we propose the variable extended power average (VEPA) operator for scientifically handling extreme values, extending it to SvNCNs with the SvNCNs fairly variable extended power average (SvNCNFVEPA) operators and their extended form; (4) we introduce the GRA method to calculate the degree of utility of alternatives relative to ideal and anti-ideal alternatives, forming the GRA-MARCOS method. This method can reflect both indicator differences and the similarity of alternatives, thereby rendering the evaluation results more scientific and objective; (5) to illustrate the application of the method to the MAGDM problem, we apply it to the example of DTP selection. Parameter sensitivity analysis and comparative analysis with other existing methods demonstrate that our proposed method is more scientific and flexible.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"263 ","pages":"Article 125703"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multilevel threshold segmentation of rice plant images utilizing tuna swarm optimization algorithm incorporating quadratic interpolation and elite swarm genetic operators
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125673
Wentao Wang , Chen Ye , Zhongjie Pan , Jun Tian
Rice plant images exhibit varying texture characteristics across different growth stages and environmental conditions. A suitable image thresholding segmentation method can effectively separate the different feature regions of rice plants, enabling better monitoring of rice growth to improve yield. This paper employs the Sigmoid non-linear weight strategy, the Quadratic interpolation strategy, and the elite swarm Genetic strategy to enhance distinct stages of the Tuna Swarm Optimization (TSO) algorithm, yielding the proposed SQGTSO algorithm, which has better convergence and global optimization capability. Ten CEC2017 benchmark functions are selected to validate the performance of the SQGTSO algorithm, and the experimental results show that SQGTSO outperforms the other algorithms on 9 of them. To assess the feasibility and efficacy of SQGTSO for multilevel threshold segmentation of rice plant images, this paper selects 8 rice plant images with diverse styles and designs two sets of comparative experiments at threshold levels ranging from 4 to 30. The SQGTSO algorithm is comprehensively benchmarked against seven advanced metaheuristic algorithms and one machine learning method; in one experiment set, Otsu’s method serves as the fitness function for the metaheuristic algorithms, and in the other, the minimum cross-entropy thresholding (MCET) method does. The assessment criteria include fitness values, PSNR, SSIM, FSIM, and HPSI. Additionally, the Friedman test is used for statistical analysis of the five metrics yielded by each algorithm. The experimental findings demonstrate the significant advantages of SQGTSO over its competitors on the five evaluation metrics and in convergence performance.
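For concreteness, the fitness function maximized by such metaheuristics at each threshold level can be written directly: Otsu's multilevel objective is the between-class variance induced by a candidate threshold vector on the grayscale histogram. The sketch below pairs it with a naive random-search driver as a stand-in for the SQGTSO optimizer; names and the driver are illustrative.

```python
# Multilevel Otsu fitness for a candidate threshold vector (sketch).
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """hist: 256-bin grayscale histogram; thresholds: ints in (0, 255).
    Returns the between-class variance that the optimizer maximizes."""
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    edges = [0, *sorted(thresholds), 256]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                        # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))         # stand-in for a rice image
hist = np.bincount(img.ravel(), minlength=256)
# naive random search in place of SQGTSO, at threshold level 4
best = max((rng.choice(255, size=4, replace=False) + 1 for _ in range(2000)),
           key=lambda t: otsu_between_class_variance(hist, t))
print(sorted(best), otsu_between_class_variance(hist, best))
```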
{"title":"Multilevel threshold segmentation of rice plant images utilizing tuna swarm optimization algorithm incorporating quadratic interpolation and elite swarm genetic operators","authors":"Wentao Wang , Chen Ye , Zhongjie Pan , Jun Tian","doi":"10.1016/j.eswa.2024.125673","DOIUrl":"10.1016/j.eswa.2024.125673","url":null,"abstract":"<div><div>Rice plant images exhibit varying texture characteristics across different growth stages and environmental conditions. A suitable image thresholding segmentation method can effectively separate the different feature regions of rice plants for better monitoring of rice growth to improve yield. This paper employs the Sigmoid non-linear weights strategy, Quadratic interpolation strategy, and elite swarm Genetic strategy to enhance distinct stages of the Tuna Swarm Optimization algorithm (TSO) to propose the SQGTSO algorithm, which has better convergence and global optimization capability. 10 CEC2017 benchmark functions are selected to validate the performance of the SQGTSO algorithm, and the experimental results show that the SQGTSO algorithm outperforms the other algorithms in 9 benchmark functions. To assess the feasibility and efficacy of the SQGTSO for multilevel threshold segmentation of rice plant images, this paper selects 8 rice plant images with diverse styles for the design of two sets of comparative experiments. The SQGTSO algorithm is comprehensively benchmarked against seven advanced metaheuristic algorithms and one machine learning method. Under the conditions of threshold levels ranging from 4 to 30, two distinct experiment sets are devised. In each set, Otsu’s method and the MCET method are employed as fitness functions for the metaheuristic algorithms, respectively. The assessment criteria include fitness values, PSNR, SSIM, FSIM and HPSI. Additionally, the Friedman method is utilized for statistical analysis of the five metrics yielded by each algorithm. The experimental findings demonstrate the significant advantages of the SQGTSO method concerning five evaluation metrics and its convergence performance compared to other competitors.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"263 ","pages":"Article 125673"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A decomposition-based many-objective evolutionary algorithm with Q-learning guide weight vectors update
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125607
HaiJian Zhang, Yiru Dai
When dealing with regular, simple Pareto fronts (PFs), the decomposition-based multi-objective evolutionary algorithm (MOEA/D) performs well by presetting a set of uniformly distributed weight vectors. However, its performance declines when faced with complex and irregular PFs. Many algorithms address this problem by periodically adjusting the distribution of the weight vectors, but these methods do not take the population's performance into account and are likely to update the weight vectors at the wrong time. In addition, for the SBX crossover operator, the setting of its distribution index largely affects the exploration and convergence ability of the algorithm, so a single parameter setting has negative impacts. To tackle these challenges, this paper proposes a method that simultaneously adapts the weight vectors and optimizes the SBX parameter via Q-learning (RL-MaOEA/D). To make the strategies produced by Q-learning more accurate, two different metrics (CD and NCD) are proposed that capture the diversity and convergence of the individual and the population, respectively. RL-MaOEA/D is compared with seven state-of-the-art algorithms on different problems, and the simulation results show that the proposed algorithm performs better.
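The Q-learning component can be pictured as a small tabular controller that decides, from population-state feedback, whether to adjust the weight vectors now or keep them. The sketch below is a generic ε-greedy tabular Q-learning loop with made-up states and rewards, not the paper's CD/NCD-based design.

```python
# Tabular Q-learning gate for weight-vector updates (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_states = 4                        # e.g., discretized population-state bins
actions = ("keep", "adjust")
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2

def choose_action(state):
    if rng.random() < eps:                        # epsilon-greedy exploration
        return int(rng.integers(len(actions)))
    return int(Q[state].argmax())

def q_update(s, a, reward, s_next):
    """Standard one-step Q-learning backup."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# toy environment: adjusting only pays off when the population stagnates
# (state 0); elsewhere an update at the wrong time is penalized
for _ in range(5000):
    s = int(rng.integers(n_states))
    a = choose_action(s)
    reward = 1.0 if (s == 0) == (a == 1) else -1.0
    q_update(s, a, reward, int(rng.integers(n_states)))

print(Q.argmax(axis=1))   # learned policy: adjust in state 0, keep elsewhere
```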
{"title":"A decomposition-based many-objective evolutionary algorithm with Q-learning guide weight vectors update","authors":"HaiJian Zhang, Yiru Dai","doi":"10.1016/j.eswa.2024.125607","DOIUrl":"10.1016/j.eswa.2024.125607","url":null,"abstract":"<div><div>When dealing with regular, simple Pareto fronts (PFs), the decomposition-based multi-objective optimization algorithm (MOEA/D) performs well by presetting a set of uniformly distributed weight vectors. However, its performance declines when faced with complex and irregular PFs. Many algorithms address this problem by periodically adjusting the distribution of the weight vectors, but these methods do not take into account the performance of the population and are likely to update the weight vectors at the wrong time. In addition, for the SBX crossover operator, the setting of its distribution index will largely affect the exploration and convergence ability of the algorithm, so a single parameter setting will have negative impacts. To tackle these challenges, this paper proposes a method to simultaneously adaptively update weight vectors and optimize SBX parameter via Q-learning(RL-MaOEA/D). In order to make the strategies made by Q-learning more accurate, Two different metrics (CD and NCD) are proposed that capture diversity and convergence of individual and population respectively. RL-MaOEA/D is compared with seven state-of-the-art algorithms on different problems, and the simulation results reflect that the proposed algorithm has better performance.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"262 ","pages":"Article 125607"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bus scheduling with heterogeneous fleets: Formulation and hybrid metaheuristic algorithms
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125720
Mohammad Sadrani , Alejandro Tirachini , Constantinos Antoniou
This paper focuses on optimizing mixed-fleet bus scheduling (MFBS) with vehicles of different sizes in public transport systems. We develop a novel mixed-integer nonlinear programming (MINLP) model to address the MFBS problem by optimizing vehicle assignment and dispatching programs. The model considers user costs, operator costs, and the crowding inconvenience of standing and sitting passengers. To tackle the complexity of the MFBS problem, we employ the Genetic Algorithm (GA) and the Grey Wolf Optimizer (GWO). In addition, we develop two hybrid metaheuristics, GA-SA [a combination of GA and Simulated Annealing (SA)] and GWO-SA (a combination of GWO and SA), to improve optimization capabilities for the MFBS problem. We also employ a Taguchi approach to fine-tune the metaheuristics’ parameters. We extensively examine and compare the metaheuristics’ performance across small, medium, and large samples, considering solution quality, computational time, and the stability of each algorithm’s results. We also compare the metaheuristics’ solutions with the optimal solutions obtained by the GAMS software on small and medium-scale samples. Our findings show that GWO-SA outperforms the other metaheuristics. Applying our model to a real bus corridor in Santiago, Chile, we find that the precise dispatching plans generated by the more sophisticated algorithms (GA-SA and GWO-SA) lead to larger cost savings and improved performance compared to the simpler algorithms (GA and GWO). Interestingly, the more advanced algorithms make a difference in fleet planning under crowded scenarios, whereas for low- and medium-demand cases, simpler dispatching algorithms can be used without a drop in accuracy.
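The shape of the objective can be sketched as a cost evaluator for a candidate plan that assigns a headway and a vehicle size to each scheduling period: passenger waiting cost plus operating cost plus a crowding penalty for standees. All coefficients below are made-up placeholders, not the paper's calibrated values.

```python
# Sketch of a mixed-fleet dispatch cost evaluator (placeholder coefficients).
def mfbs_cost(headways_min, capacities, demand_per_hr,
              wait_value=10.0, op_cost_per_veh=50.0, crowd_penalty=0.5):
    """headways_min[i] and capacities[i]: headway (minutes) and vehicle size
    chosen for period i. Returns total user + operator + crowding cost."""
    total = 0.0
    for h, cap in zip(headways_min, capacities):
        buses_per_hr = 60.0 / h
        pax_per_bus = demand_per_hr / buses_per_hr
        wait_cost = wait_value * demand_per_hr * (h / 2.0) / 60.0  # avg wait
        op_cost = op_cost_per_veh * buses_per_hr
        standees = max(0.0, pax_per_bus - 0.7 * cap)  # assume 70% are seats
        crowd_cost = crowd_penalty * standees * buses_per_hr
        total += wait_cost + op_cost + crowd_cost
    return total

# small buses at high frequency vs. large buses at low frequency
print(mfbs_cost([5, 5], [60, 60], demand_per_hr=600))
print(mfbs_cost([10, 10], [120, 120], demand_per_hr=600))
```

A metaheuristic such as GA-SA or GWO-SA would then search over the integer headway and vehicle-size choices minimizing this kind of objective, subject to fleet-availability constraints.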
{"title":"Bus scheduling with heterogeneous fleets: Formulation and hybrid metaheuristic algorithms","authors":"Mohammad Sadrani , Alejandro Tirachini , Constantinos Antoniou","doi":"10.1016/j.eswa.2024.125720","DOIUrl":"10.1016/j.eswa.2024.125720","url":null,"abstract":"<div><div>This paper focuses on optimizing mixed-fleet bus scheduling (MFBS) with vehicles of different sizes in public transport systems. We develop a novel mixed-integer nonlinear programming (MINLP) model to address the MFBS problem by optimizing vehicle assignment and dispatching programs. The model considers user costs, operator costs, and the crowding inconvenience of standing and sitting passengers. To tackle the complexity of the MFBS problem, we employ Genetic Algorithm (GA) and Grey Wolf Optimizer (GWO). Besides, we develop two hybrid metaheuristics, including GA-SA [a combination of GA and Simulated Annealing (SA)] and GWO-SA (a combination of GWO and SA), to improve optimization capabilities for the MFBS problem. We also employ a Taguchi approach to fine-tune the metaheuristics’ parameters. We widely examine and compare the metaheuristics’ performance across various-sized samples (small, medium, and large), considering solution quality, computational time, and the result stability of each algorithm. We also compare the metaheuristics’ solutions with the optimal solutions acquired by GAMS software in small and medium-scale samples. Our findings show that the GWO-SA outperforms the other metaheuristics. Applying our model to a real bus corridor in Santiago, Chile, we find that precise dispatching plans generated by more sophisticated/advanced algorithms (GA-SA and GWO-SA) lead to larger cost savings and improved performance compared to simpler algorithms (GA and GWO). Interestingly, utilizing more advanced algorithms makes a difference in terms of fleet planning in crowded scenarios, whereas for low and medium-demand cases, simpler dispatching algorithms could be used without a drop in accuracy.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"263 ","pages":"Article 125720"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A day-ahead wind speed correction method: Enhancing wind speed forecasting accuracy using a strategy combining dynamic feature weighting with multi-source information and dynamic matching with improved similarity function
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125724
Mao Yang , Yunfeng Guo , Bo Wang , Zhao Wang , Rongfan Chai
The forecasting error of day-ahead wind speed (WS) seriously affects wind power integration and power system security and stability. In this regard, this paper fully considers the spatiotemporal correlation of wind farms (WFs) at different geographical locations and proposes a day-ahead combined WS correction method that integrates dynamically weighted multi-source station information. Unlike previous WS correction methods, this paper fully exploits the dynamic correlation of WS between WFs: it introduces an improved weighted similarity function to screen and dynamically weight the information of dynamically correlated WFs, and it feeds the resulting dynamic weighting feature into the WS correction process. A combined decomposition mechanism is proposed that couples sequential variational mode decomposition (SVMD) and feature mode decomposition (FMD) models to extract the most relevant trend and non-stationary components of the forecasted and measured WS. A combined correction model is introduced in which a Non-stationary Transformer combined with a bidirectional long short-term memory network (Ns-Transformer-BiLSTM) corrects the stationary WS component, and a dynamic matching mechanism for the fluctuation components, based on the improved similarity, is proposed to correct the non-stationary components. The proposed method is applied to several regional WFs in China. The experimental results show that the average improvements in NRMSE, NMAE, and R reach 2.4%–3.7%, 2.0%–3.0%, and 3.3%–9.7%, respectively. For certain individual WFs, the NRMSE and NMAE of the corrected WS are reduced by 10% and 9%, respectively, and R is increased by 33%.
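One plausible reading of the dynamic weighting step, sketched below, is a similarity-gated combination of neighboring farms' WS series: a recency-weighted correlation serves as the improved similarity function and a softmax turns similarities into station weights. Both choices are assumptions for illustration, not the paper's exact formulas.

```python
# Similarity-weighted fusion of multi-source wind-farm series (illustrative).
import numpy as np

def weighted_similarity(x, y, decay=0.97):
    """Recency-weighted correlation between two wind-speed series
    (assumed form of the improved weighted similarity function)."""
    w = decay ** np.arange(len(x))[::-1]     # newer samples weigh more
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - xm) * (y - ym), weights=w)
    sx = np.sqrt(np.average((x - xm) ** 2, weights=w))
    sy = np.sqrt(np.average((y - ym) ** 2, weights=w))
    return cov / (sx * sy + 1e-8)

def station_weights(target, neighbors, temp=0.2):
    """Softmax over similarities: dynamically correlated farms get more say."""
    sims = np.array([weighted_similarity(target, n) for n in neighbors])
    z = np.exp(sims / temp)
    return z / z.sum()

rng = np.random.default_rng(2)
target = rng.normal(8, 2, size=168)          # one week of hourly WS
neighbors = [target + rng.normal(0, s, size=168) for s in (0.5, 1.5, 3.0)]
print(station_weights(target, neighbors))    # the closest farm dominates
```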
{"title":"A day-ahead wind speed correction method: Enhancing wind speed forecasting accuracy using a strategy combining dynamic feature weighting with multi-source information and dynamic matching with improved similarity function","authors":"Mao Yang , Yunfeng Guo , Bo Wang , Zhao Wang , Rongfan Chai","doi":"10.1016/j.eswa.2024.125724","DOIUrl":"10.1016/j.eswa.2024.125724","url":null,"abstract":"<div><div>Forecasting error of day-ahead wind speed (WS) seriously affects wind power integration and power system security and stability. In this regard, this paper fully considers the spatiotemporal correlation of wind farms (WFs) in different geographical locations, and proposes a day-ahead WS combined correction method that integrates multi-source station dynamic information weighting. Different from the previous WS correction methods, this paper fully considers the dynamic correlation of WS between the WFs, introduces an improved weighted similarity function to screen and dynamically weight the information of WFs with dynamic correlation, and introduces the dynamic weighting feature into the WS correction process. A combined decomposition mechanism is proposed, which combines sequential variational mode decomposition (SVMD) and feature mode decomposition (FMD) models to extract the most relevant trend components and non-stationary components of forecasted and measured WS. A combined correction model is introduced, and a combined architecture of Non-stationary Transformer combined with bidirectional long short-term memory network (Ns-Transformer-BILSTM) is used to correct the stationary WS component. A dynamic matching mechanism of fluctuation components considering improved similarity is proposed for the correction of non-stationary components. The proposed method is applied to several regional WFs in China. The experimental results show that the average correction of <em>N<sub>RMSE</sub></em>, <em>N<sub>MAE</sub></em> and R can reach 2.4 % ∼ 3.7 %, 2.0 % ∼ 3.0 % and 3.3 % ∼ 9.7 %, respectively. The <em>N<sub>RMSE</sub></em> and <em>N<sub>MAE</sub></em> corresponding to the corrected WS of certain individual WFs can be reduced by 10 % and 9 %, respectively, and <em>R</em> can be increased by 33 %.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"263 ","pages":"Article 125724"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A unified Personalized Federated Learning framework ensuring Domain Generalization
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125700
Yuan Liu, Zhe Qu, Shu Wang, Chengchao Shen, Yixiong Liang, Jianxin Wang
Personalized Federated Learning (pFL) allows the development of customized models for personalized information from multiple distributed domains. In real-world scenarios, some testing data may originate from new target domains (unseen domains) outside the federated network, giving rise to another learning task called Federated Domain Generalization (FedDG). In this paper, we tackle a new problem, named Personalized Federated Domain Generalization (pFedDG), which not only protects personalization but also obtains a general model for unseen target domains. We observe that the pFL and FedDG objectives can conflict, posing challenges in addressing both simultaneously. To moderate the conflict sufficiently, we develop a unified framework, named Personalized Federated Decoupled Representation (pFedDR), which decouples the two objectives using two separate loss functions and uses an integrated predictor to serve both learning tasks. Specifically, the framework decouples the domain-sensitive layers linked to the two representations and designs an entropy increase loss that encourages the separation of the two representations, achieving pFedDG. Extensive experiments show that our pFedDR method achieves state-of-the-art performance on both tasks while incurring almost no increase in communication cost. Code is available at https://github.com/CSU-YL/pFedDR.
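The entropy-increase idea admits a compact sketch: penalize low entropy of the cross-similarity distribution between the personalized and general representations, so that minimizing the loss pushes the two embeddings apart. This is one plausible reading written with hypothetical tensors, not the released implementation (see the linked repository for that).

```python
# Sketch of an entropy-increase loss separating two representations
# (a plausible reading, not the released pFedDR code).
import torch
import torch.nn.functional as F

def entropy_increase_loss(z_personal, z_general):
    """Return the negative entropy of the cross-similarity distribution,
    so that minimizing it increases entropy, i.e. reduces alignment
    between the personalized and general embeddings."""
    sim = F.cosine_similarity(z_personal.unsqueeze(1),
                              z_general.unsqueeze(0), dim=-1)   # (B, B)
    p = F.softmax(sim, dim=-1)
    entropy = -(p * (p + 1e-8).log()).sum(dim=-1).mean()
    return -entropy

torch.manual_seed(0)
z_p = torch.randn(8, 64, requires_grad=True)   # personalized-branch features
z_g = torch.randn(8, 64, requires_grad=True)   # general-branch features
loss = entropy_increase_loss(z_p, z_g)
loss.backward()                                # usable inside a training step
print(loss.item())
```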
{"title":"A unified Personalized Federated Learning framework ensuring Domain Generalization","authors":"Yuan Liu, Zhe Qu, Shu Wang, Chengchao Shen, Yixiong Liang, Jianxin Wang","doi":"10.1016/j.eswa.2024.125700","DOIUrl":"10.1016/j.eswa.2024.125700","url":null,"abstract":"<div><div>Personalized Federated Learning (pFL) allows for the development of customized models for personalized information from multiple distributed domains. In real-world scenarios, some testing data may originate from new target domains (unseen domains) outside of the federated network, resulting in another learning task called Federated Domain Generalization (FedDG). In this paper, we aim to tackle the new problem, named <strong>Personalized Federated Domain Generalization (pFedDG)</strong>, which <em>not only protects the personalization but also obtains a general model for unseen target domains</em>. We observe that pFL and FedDG objectives can conflict, posing challenges in addressing both objectives simultaneously. To sufficiently moderate the conflict, we develop a unified framework, named <strong>Personalized Federated Decoupled Representation (pFedDR)</strong>, which decouples the two objectives using two separate loss functions simultaneously and uses an integrated predictor to serve both two learning tasks. Specifically, the framework decouples domain-sensitive layers linked to different representations and design an entropy increase loss to encourage the separation of two representations to achieve the pFedDG. Extensive experiments show that our pFedDR method achieves state-of-the-art performance for both tasks while incurring almost no increase in communication cost. Code is available at <span><span>https://github.com/CSU-YL/pFedDR</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"263 ","pages":"Article 125700"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FusionGCN: Multi-focus image fusion using superpixel features generation GCN and pixel-level feature reconstruction CNN
Pub Date: 2024-11-07 | DOI: 10.1016/j.eswa.2024.125665
Yuncan Ouyang , Hao Zhai , Hanyue Hu , Xiaohang Li , Zhi Zeng
In recent years, convolutional neural networks have made significant advances in computer vision, effectively addressing many previously challenging problems, and an increasing number of researchers are focusing on this field and proposing innovative network architectures. However, many existing networks require intricate module designs and a substantial number of parameters to achieve satisfactory fusion results, which poses challenges for lightweight devices with constrained computational resources. To mitigate this concern, the present study introduces a novel methodology that integrates block-level segmentation with pixel-level optimization. Specifically, we first employ graph convolutional networks to perform flexible convolutions on the large, irregular regions generated through superpixel clustering, achieving coarse segmentation at the block level. We then use a parallel lightweight convolutional network to provide pixel-level guidance, ultimately producing a more accurate decision map. Furthermore, to leverage the strengths of both networks and to optimize the graph convolutional network's feature generation for non-Euclidean data, we design a superpixel-based graph decoder alongside a pixel-based convolutional feature-extraction block to enhance feature acquisition and propagation. Compared with numerous state-of-the-art methods, our approach is competitive in qualitative, quantitative, and efficiency evaluations. The code can be downloaded at https://github.com/ouyangbaicai/FusionGCN.
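The superpixel-to-graph step that feeds the GCN can be sketched with scikit-image: SLIC produces the irregular regions, per-region mean colors become node features, and spatial adjacency between labels gives the edges. The function below is an illustrative hand-rolled version, not the repository's code.

```python
# Superpixel graph construction for a GCN (sketch using scikit-image SLIC).
import numpy as np
from skimage.segmentation import slic

def superpixel_graph(image, n_segments=100):
    """Return node features (mean color per superpixel) and a boolean
    adjacency matrix linking spatially neighboring superpixels."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n = int(labels.max()) + 1
    feats = np.stack([image[labels == i].mean(axis=0) for i in range(n)])
    adj = np.zeros((n, n), dtype=bool)
    # horizontally and vertically adjacent pixels with different labels
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        m = a != b
        adj[a[m], b[m]] = adj[b[m], a[m]] = True
    return feats, adj

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                 # stand-in for an input image
feats, adj = superpixel_graph(img)
print(feats.shape, int(adj.sum()) // 2, "edges")
```

In FusionGCN, such node features would pass through graph convolutions to produce the block-level decision, which the pixel-level CNN then refines.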
{"title":"FusionGCN: Multi-focus image fusion using superpixel features generation GCN and pixel-level feature reconstruction CNN","authors":"Yuncan Ouyang , Hao Zhai , Hanyue Hu , Xiaohang Li , Zhi Zeng","doi":"10.1016/j.eswa.2024.125665","DOIUrl":"10.1016/j.eswa.2024.125665","url":null,"abstract":"<div><div>In recent years, convolutional neural networks have demonstrated significant advancements in the domain of computer vision, effectively addressing numerous previously challenging issues. An increasing number of researchers are focusing their investigations on this field, proposing innovative network architectures. However, many existing networks necessitate intricate module designs and a substantial number of parameters to achieve satisfactory fusion outcomes, which poses challenges for lightweight devices with constrained computational resources. To mitigate this concern, the present study introduces a novel methodology that integrates block segmentation with pixel optimization. Specifically, we initially employ graph convolutional networks to execute flexible convolutions on large-scale, irregular regions generated through superpixel clustering, thereby achieving coarse segmentation at the block level. Subsequently, we utilize parallel lightweight convolutional networks to provide pixel-level guidance, ultimately resulting in a more accurate decision map. Furthermore, to leverage the strengths of both networks and facilitate the optimization of feature generation from the graph convolutional network for non-Euclidean data, we meticulously design a superpixel-based graph decoder alongside a pixel-based convolutional neural network extraction block to enhance feature acquisition and propagation. In comparison to numerous state-of-the-art methodologies, our approach demonstrates commendable competitiveness in both qualitative and quantitative analyses, as well as in efficiency evaluations. The code can be downloaded at <span><span>https://github.com/ouyangbaicai/FusionGCN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"262 ","pages":"Article 125665"},"PeriodicalIF":7.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}