S. Pandey, A. S. Ranadive, Sovan Samanta, Vivek Kumar Dubey
Several methodologies have been proposed in the graph-theory literature for depicting collaboration among entities. However, these studies measure collaboration using crisp graphical properties and discuss only its positive effects. In this manuscript, we discuss the simultaneous collaboration and competition observed among individuals, organizations, countries, communities and many others. The notion of a bipolar fuzzy bunch graph (BFBG) is introduced in this study to capture the positive and negative effects of both collaboration and competition, jointly called coopetition. The goal of this paper is to introduce an improved representation and analytical measure for coopetition. To further enrich the literature on competition graphs, the notion of survival and winning competition among species is introduced, together with its bipolar fuzzy competition degrees. We also introduce two coopetition measures to understand the ranking structure of entities (i.e., which node better collaborates and competes with other nodes) in the network: a) the bipolar fuzzy coopetition degree and b) the bipolar fuzzy coopetition index. Using a bipolar fuzzy coopetition graph, we find evidence to validate our framework and computations. We gathered research articles on COVID-19 and their citations over a specific time period from a specific journal. To demonstrate our approach, we display the bipolar fuzzy collaboration and competition of various countries on COVID-19 and classify their rankings based on their positive and negative coopetition indices.
{"title":"A study on coopetition using bipolar fuzzy bunch graphs","authors":"S. Pandey, A. S. Ranadive, Sovan Samanta, Vivek Kumar Dubey","doi":"10.3233/jifs-234061","DOIUrl":"https://doi.org/10.3233/jifs-234061","url":null,"abstract":"Several methodologies have been proposed in the literature of graph theory for depicting collaboration among entities. However, in these studies, the measure of collaboration is taken based on the crisp graphical properties and discusses only its positive effects. In this manuscript, we discuss the simultaneous collaboration and competition that are observed among individuals, organizations, countries, communities and many others. The notion of bipolar fuzzy bunch graph (BFBG) is introduced in this study to effectively capture the positive and negative effects of both the terms collaboration and competition, which is jointly called coopetition. The goal of this paper is to introduce an improved representation and analytical measure for coopetition. To further enrich the literature on competition graphs, the notion of survival and winning competition among species has been introduced and also provides its bipolar fuzzy competition degrees. We also introduce two types of coopetition measures to understand the ranking structure of entities (i.e. which node batter collaborates and competes with other nodes) in the network: a) bipolar fuzzy coopetition degree and b) bipolar fuzzy coopatition index. In the form of a bipolar fuzzy coopetition graph, we find evidence to validate our framework and computations. We gathered research articles on COVID-19 and their citations over a specific time period from a specific journal. To demonstrate our approach, we displayed bipolar fuzzy collaboration and competition of various countries on COVID-19 and classified their rankings based on their positive and negative coopetition indices.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"21 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140240714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The fees of different certification services are charged in different ways: for example, T-mall.com (one of the leading e-commerce platforms in China) uses a total certification service, where every type of seller participating in the platform must purchase certification services; Pinduoduo.com (another Chinese e-commerce platform) uses an alternative certification service, where, after paying a transaction fee, each seller participating in the platform can choose whether to purchase certification services. This paper studies how the choice of certification services affects the participation decisions of both sellers and buyers, as well as the revenue and quality level (the proportion of high-quality sellers among all participating sellers) of a platform. According to previous research, network externalities also affect sellers’ and buyers’ participation strategies, yet studies on the effectiveness of different certification services for e-commerce platforms have rarely considered both positive and negative network externalities. The results of the constructed game-theoretic models show that both the certification capability and the certification cost play critical roles in determining which certification service generates more revenue. If a platform provides certification services, the total certification service always yields a higher quality level than the alternative certification service. Furthermore, the applicable scope of certification services (defined as the certification strategy space) can be broadened by increasing both the profit ratio (the ratio between the profit of H-type sellers and L-type sellers) and the value ratio (the ratio between the value of H-type sellers and L-type sellers). Counterintuitively, a higher certification capability does not always yield a higher certification fee.
{"title":"The effectiveness of certification services on e-commerce platforms considering network externalities","authors":"Xueke Du, Wenli Li, Xiaowen Wei","doi":"10.3233/jifs-234621","DOIUrl":"https://doi.org/10.3233/jifs-234621","url":null,"abstract":"The fees of different certification services are charged in different ways: For example, T-mall.com (one of the leading e-commerce platforms in China) uses a total certification service, where each type of seller participating in the platform must purchase certification services; Pinduoduo.com (another Chinese e-commerce platform) uses an alternative certification service, where after paying a transaction fee, each seller participating in the platform can choose whether to purchase certification services. This paper studies how the choice of certification services affects the participation decisions of both sellers and buyers, as well as the revenue and quality level (the proportion of high-quality sellers of all participating sellers) of a platform. According to previous research, network externalities also affect sellers’ and buyers’ participation strategies. Studies on the effectiveness of different certification services for e-commerce platforms have rarely considered both positive and negative network externalities. The results of constructed game-theoretic models show that both the certification capability and the certification cost play critical roles in determining which certification services can generate more revenue. If a platform provides certification services, the total certification service always generates a higher quality level than the alternative certification service. Furthermore, the applicable scope of certification services (defined as the certification strategy space), can be broadened by increasing both the profit ratio (the ratio between the profit of H-type sellers and L-type sellers) and the value ratio (the ratio between the value of H-type sellers and L-type sellers). Counterintuitively, a higher certification capability does not always yield a higher certification fee.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"123 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140237973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The most valuable information in a Hyperspectral Image (HSI) should be processed properly. Using dimensionality reduction techniques in two distinct approaches, we created a structure for HSI to collect physiological and diagnostic information. The tissue Oxygen Saturation Level (StO2) was extracted using the HSI approach as a physiological characteristic for stress detection. Our findings suggest that this characteristic may not be affected by environmental humidity or temperature. Compared against the standard StO2 reference and pressure concentrations, the social stress assessments showed substantial variance and considerable practical differentiation. The proposed system was evaluated on tumor images from rats with head and neck cancers over the 450 to 900 nm wavelength range. A Fourier transformation was applied to improve precision and to normalize the brightness and mean spectrum components. The results indicate that the extracted features support inexpensive, rapid classification and are significant for structuring HSI analysis for cancer detection during surgical resection in animal subjects. Our proposed model achieves reliability of 89.62% and accuracy of 95.26%, improving on existing systems.
{"title":"Applying dimensionality reduction methods to extract physiological and diagnostic features for clinical Hyperspectral Images","authors":"V. Lalitha, B. Latha","doi":"10.3233/jifs-236935","DOIUrl":"https://doi.org/10.3233/jifs-236935","url":null,"abstract":"The most valuable information of Hyperspectral Image (HSI) should be processed properly. Using dimensionality reduction techniques in two distinct approaches, we created a structure for HSI to collect physiological and diagnostic information. The tissue Oxygen Saturation Level (StO2) was extracted using the HSI approach as a physiological characteristic for stress detection. Our research findings suggest that this unique characteristic may not be affected by humidity or temperature in the environment. Comparing the standard StO2 reference and pressure concentrations, the social stress assessments showed a substantial variance and considerable practical differentiation. The proposed system has already been evaluated on tumor images from rats with head and neck cancers using a spectrum from 450 to 900 nm wavelength. The Fourier transformation was developed to improve precision, and normalize the brightness and mean spectrum components. The analysis of results showed that in a difficult situation where awareness could be inexpensive due to feature possibilities for rapid classification tasks and significant in measuring the structure of HSI analysis for cancer detection throughout the surgical resection of wildlife. Our proposed model improves performance measures such as reliability at 89.62% and accuracy at 95.26% when compared with existing systems.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"23 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140237327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces an innovative approach, the LS-SLM (Local Search with Smart Local Moving) technique, for enhancing the efficiency of article recommendation systems based on community detection and topic modeling. The methodology undergoes rigorous evaluation using a comprehensive dataset extracted from the “dblp. v12.json” citation network. Experimental results presented herein provide a clear depiction of the superior performance of the LS-SLM technique when compared to established algorithms, namely the Louvain Algorithm (LA), Stochastic Block Model (SBM), Fast Greedy Algorithm (FGA), and Smart Local Moving (SLM). The evaluation metrics include accuracy, precision, specificity, recall, F-Score, modularity, Normalized Mutual Information (NMI), betweenness centrality (BTC), and community detection time. Notably, the LS-SLM technique outperforms existing solutions across all metrics. For instance, the proposed methodology achieves an accuracy of 96.32%, surpassing LA by 16% and demonstrating a 10.6% improvement over SBM. Precision, a critical measure of relevance, stands at 96.32%, showcasing a significant advancement over GCR-GAN (61.7%) and CR-HBNE (45.9%). Additionally, sensitivity analysis reveals that the LS-SLM technique achieves the highest sensitivity value of 96.5487%, outperforming LA by 14.2%. The LS-SLM also demonstrates superior specificity and recall, with values of 96.5478% and 96.5487%, respectively. The modularity performance is exceptional, with LS-SLM obtaining 95.6119%, significantly outpacing SLM, FGA, SBM, and LA. Furthermore, the LS-SLM technique excels in community detection time, completing the process in 38,652 ms, showcasing efficiency gains over existing techniques. The BTC analysis indicates that LS-SLM achieves a value of 94.6650%, demonstrating its proficiency in controlling information flow within the network.
{"title":"Effective community detection with topic modeling in article recommender systems using LS-SLM and PCC-LDA","authors":"Sandeep Kumar Rachamadugu, T.P. Pushphavathi","doi":"10.3233/jifs-233851","DOIUrl":"https://doi.org/10.3233/jifs-233851","url":null,"abstract":"This paper introduces an innovative approach, the LS-SLM (Local Search with Smart Local Moving) technique, for enhancing the efficiency of article recommendation systems based on community detection and topic modeling. The methodology undergoes rigorous evaluation using a comprehensive dataset extracted from the “dblp. v12.json” citation network. Experimental results presented herein provide a clear depiction of the superior performance of the LS-SLM technique when compared to established algorithms, namely the Louvain Algorithm (LA), Stochastic Block Model (SBM), Fast Greedy Algorithm (FGA), and Smart Local Moving (SLM). The evaluation metrics include accuracy, precision, specificity, recall, F-Score, modularity, Normalized Mutual Information (NMI), betweenness centrality (BTC), and community detection time. Notably, the LS-SLM technique outperforms existing solutions across all metrics. For instance, the proposed methodology achieves an accuracy of 96.32%, surpassing LA by 16% and demonstrating a 10.6% improvement over SBM. Precision, a critical measure of relevance, stands at 96.32%, showcasing a significant advancement over GCR-GAN (61.7%) and CR-HBNE (45.9%). Additionally, sensitivity analysis reveals that the LS-SLM technique achieves the highest sensitivity value of 96.5487%, outperforming LA by 14.2%. The LS-SLM also demonstrates superior specificity and recall, with values of 96.5478% and 96.5487%, respectively. The modularity performance is exceptional, with LS-SLM obtaining 95.6119%, significantly outpacing SLM, FGA, SBM, and LA. Furthermore, the LS-SLM technique excels in community detection time, completing the process in 38,652 ms, showcasing efficiency gains over existing techniques. The BTC analysis indicates that LS-SLM achieves a value of 94.6650%, demonstrating its proficiency in controlling information flow within the network.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140238953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Vaikunta Pai, Manmohan Singh, Nazeer Shaik, C. Ashokkumar, D. Anuradha, Amit Gangopadhyay, Goda Srinivasa Rao, T.Sunilkumar Reddy, D. Nagaraju
As the demand for energy in India continues to surge, accurate forecasting becomes paramount for efficient resource allocation and sustainable development. This study proposes an innovative approach to forecasting Indian primary energy demand by integrating Artificial Intelligence (AI) techniques with Fuzzy Auto-regressive Distributed Lag (FADL) models. FADL models, incorporating fuzzy logic, allow for a nuanced representation of uncertainties and complexities within the energy demand dynamics. In this research, historical energy consumption data is analysed using FADL models with both symmetric and non-symmetric triangular coefficients, enhancing the model’s adaptability to the inherent uncertainties associated with energy forecasting. This study addresses the urgent need for enhanced energy planning models in the context of sustainable development. Our research aims to provide a comprehensive framework for predicting future Total Final Consumption (TFC) in alignment with the Indian National Energy Plan’s net-zero emissions target by 2035. Recognizing the limitations of current models, our research introduces a novel approach that integrates advanced algorithms and methodologies, offering a more flexible and realistic assessment of TFC trends. The primary objective of this study is to develop an improved energy planning model that surpasses existing projections by incorporating sophisticated algorithms. We aim to refine
{"title":"AI-enhanced forecasting of Indian primary energy demand: Fuzzy auto-regressive distributed lag models","authors":"T. Vaikunta Pai, Manmohan Singh, Nazeer Shaik, C. Ashokkumar, D. Anuradha, Amit Gangopadhyay, Goda Srinivasa Rao, T.Sunilkumar Reddy, D. Nagaraju","doi":"10.3233/jifs-240729","DOIUrl":"https://doi.org/10.3233/jifs-240729","url":null,"abstract":"As the demand for energy in India continues to surge, accurate forecasting becomes paramount for efficient resource allocation and sustainable development. This study proposes an innovative approach to forecasting Indian primary energy demand by integrating Artificial Intelligence (AI) techniques with Fuzzy Auto-regressive Distributed Lag (FADL) models. FADL models, incorporating fuzzy logic, allow for a nuanced representation of uncertainties and complexities within the energy demand dynamics. In this research, historical energy consumption data is analysed using FADL models with both symmetric and non-symmetric triangular coefficients, enhancing the model’s adaptability to the inherent uncertainties associated with energy forecasting. This study addresses the urgent need for enhanced energy planning models in the context of sustainable development. Our research aims to provide a comprehensive framework for predicting future Total Final Consumption (TFC) in alignment with the Indian National Energy Plan’s net-zero emissions target by 2035. Recognizing the limitations of current models, our research introduces a novel approach that integrates advanced algorithms and methodologies, offering a more flexible and realistic assessment of TFC trends. The primary objective of this study is to develop an improved energy planning model that surpasses existing projections by incorporating sophisticated algorithms. We aim to refine","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140239457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved genetic algorithm is proposed for the Job Shop Scheduling Problem with minimum Total Weighted Tardiness (JSSP/TWT). In the proposed algorithm, a decoding method based on the Minimum Local Tardiness (MLT) rule of the job is introduced on top of the commonly used job-numbering chromosome encoding, and a chromosome recombination operator based on MLT-rule decoding is added to the basic genetic algorithm flow. To enhance the quality of the initial population, non-delay scheduling combined with heuristic rules is used for population initialization, and a PiMX (Precedence in Machine crossover) operator based on the processing priority on each machine is designed. Comparative simulation-scheduling experiments under different algorithm configurations are conducted on randomly generated larger-scale JSSP/TWT instances. Statistical analysis of the experimental evidence indicates that the genetic algorithm based on the above three improvements exhibits significantly superior performance for solving JSSP/TWT: faster convergence and better scheduling solutions can be obtained.
{"title":"Improved genetic algorithm for solving the total weight tardiness job shop scheduling problem","authors":"Hanpeng Wang, Hengen Xiong","doi":"10.3233/jifs-236712","DOIUrl":"https://doi.org/10.3233/jifs-236712","url":null,"abstract":"An improved genetic algorithm is proposed for the Job Shop Scheduling Problem with Minimum Total Weight Tardiness (JSSP/TWT). In the proposed improved genetic algorithm, a decoding method based on the Minimum Local Tardiness (MLT) rule of the job is proposed by using the commonly used chromosome coding method of job numbering, and a chromosome recombination operator based on the decoding of the MLT rule is added to the basic genetic algorithm flow. As a way to enhance the quality of the initialized population, a non-delay scheduling combined with heuristic rules for population initialization. and a PiMX (Precedence in Machine crossover) crossover operator based on the priority of processing on the machine is designed. Comparison experiments of simulation scheduling under different algorithm configurations are conducted for randomly generated larger scale JSSP/TWT. Statistical analysis of the experimental evidence indicates that the genetic algorithm based on the above three improvements exhibits significantly superior performance for JSSP/TWT solving: faster convergence and better scheduling solutions can be obtained.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"91 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140238398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chengfei Ma, Xiaolei Yang, Heng Lu, Siyuan He, Yongshan Liu
When calculating participants’ contributions to federated learning, it is necessary to address issues such as the inability to collect complete test data and the impact of malicious and dishonest participants on the global model. This article proposes a federated aggregation method based on a contribution degree that combines cosine similarity with an approximate Shapley value method. First, a participant contribution calculation model combining cosine similarity and the approximate Shapley value method is designed to obtain the contribution values of the participants. Then, based on this contribution model, a federated aggregation algorithm is proposed in which the aggregation weight of each participant is calculated from its contribution value. Finally, the gradient parameters of the global model are determined and propagated to all participants to update their local models. Experiments were conducted under different privacy protection parameters, data noise parameters, and proportions of malicious participants. The results show that the accuracy of the model can be maintained at 90% and 65% on the MNIST and CIFAR-10 datasets, respectively. This method can reasonably and accurately calculate participant contributions without a complete test dataset, reduces computational costs to a certain extent, and can resist the influence of malicious and dishonest participants.
{"title":"Federated aggregation method based on cosine similarity approximation Shapley value method contribution degree","authors":"Chengfei Ma, Xiaolei Yang, Heng Lu, Siyuan He, Yongshan Liu","doi":"10.3233/jifs-236977","DOIUrl":"https://doi.org/10.3233/jifs-236977","url":null,"abstract":"When calculating participants’ contribution to federated learning, addressing issues such as the inability to collect complete test data and the impact of malicious and dishonest participants on the global model is necessary. This article proposes a federated aggregation method based on cosine similarity approximation Shapley value method contribution degree. Firstly, a participant contribution calculation model combining cosine similarity and the approximate Shapley value method was designed to obtain the contribution values of the participants. Then, based on the calculation model of participant contribution, a federated aggregation algorithm is proposed, and the aggregation weights of each participant in the federated aggregation process are calculated by their contribution values. Finally, the gradient parameters of the global model were determined and propagated to all participants to update the local model. Experiments were conducted under different privacy protection parameters, data noise parameters, and the proportion of malicious participants. The results showed that the accuracy of the algorithm model can be maintained at 90% and 65% on the MNIST and CIFAR-10 datasets, respectively. This method can reasonably and accurately calculate the contribution of participants without a complete test dataset, reducing computational costs to a certain extent and can resist the influence of the aforementioned participants.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"5 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140239623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object detection has made significant strides in recent years, but it remains a challenging task to accurately and quickly identify and detect objects. While humans can easily recognize objects in images or videos regardless of their appearance, computers face difficulties in this task. Object detection plays a crucial role in computer vision and finds applications in various domains such as healthcare, security, agriculture, home automation and more. To address the challenges of object detection, several techniques have been developed, including RCNN, Faster RCNN, YOLO and the Single Shot Detector (SSD). In this paper, we propose a modified YOLOv5s architecture that aims to improve detection performance. Our modified architecture incorporates the C3Ghost module along with the SPP and SPPF modules in the YOLOv5s backbone network. We also utilize the Adam and Stochastic Gradient Descent (SGD) optimizers. The paper also provides an overview of three major versions of the YOLO object detection model, YOLOv3, YOLOv4 and YOLOv5, and discusses their respective performance analyses. For our evaluation, we collected a database of pig images from the ICAR-National Research Centre on Pig farm. We assessed performance using four metrics: Precision (P), Recall (R), F1-score and mAP@0.50. The computational results demonstrate that our modified YOLOv5s architecture achieves a 0.0414 higher mAP while utilizing less memory than the original YOLOv5s architecture. This research contributes to the advancement of object detection techniques and showcases the potential of our modified YOLOv5s architecture for improved performance in real-world applications.
{"title":"Detection of an in-housed pig using modified YOLOv5 model","authors":"Salam Jayachitra Devi, Juwar Doley, Vivek Kumar Gupta","doi":"10.3233/jifs-231032","DOIUrl":"https://doi.org/10.3233/jifs-231032","url":null,"abstract":" Object detection has made significant strides in recent years, but it remains a challenging task to accurately and quickly identify and detect objects. While humans can easily recognize objects in images or videos regardless of their appearance, computers face difficulties in this task. Object detection plays a crucial role in computer vision and finds applications in various domains such as healthcare, security, agriculture, home automation and more. To address the challenges of object detection, several techniques have been developed including RCNN, Faster RCNN, YOLO and Single Shot Detector (SSD). In this paper, we propose a modified YOLOv5s architecture that aims to improve detection performance. Our modified architecture incorporates the C3Ghost module along with the SPP and SPPF modules in the YOLOv5s backbone network. We also utilize the Adam and Stochastic Gradient Descent (SGD) optimizers. The paper also provides an overview of three major versions of the YOLO object detection model: YOLOv3, YOLOv4 and YOLOv5. We discussed their respective performance analyses. For our evaluation, we collected a database of pig images from the ICAR-National Research Centre on Pig farm. We assessed the performance using four metrics such as Precision (P), Recall (R), F1-score and mAP @ 0.50. The computational results demonstrate that our method YOLOv5s architecture achieves a 0.0414 higher mAP while utilizing less memory space compared to the original YOLOv5s architecture. This research contributes to the advancement of object detection techniques and showcases the potential of our modified YOLOv5s architecture for improved performance in real world applications.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"26 44","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140240162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dabin Zhang, Zehui Yu, Liwen Ling, Huanling Hu, Ruibin Lin
As CO2 emissions continue to rise, the problem of global warming is becoming increasingly serious. Predicting carbon emissions accurately provides a robust basis for management decisions on reducing carbon emissions worldwide. However, affected by various factors, the prediction of carbon emissions is challenging due to its nonlinear and nonstationary characteristics. Thus, we propose a combination forecast model, named CEEMDAN-GWO-SVR, which incorporates multiple features to predict trends in China’s carbon emissions. First, the impacts of online search attention and public health emergencies are considered in carbon emissions prediction. Since the impact of different variables on carbon emissions is lagged, the grey relational degree is used to identify the appropriate lag series. Second, irrelevant features are eliminated through RFECV. To address the feature redundancy of online search attention, we propose a dimensionality reduction method based on keyword classification. Finally, to evaluate the features of the proposed framework, four evaluation indicators are tested in multiple machine learning models. The best-performing model (SVR) is optimized with CEEMDAN and GWO to enhance prediction accuracy. The empirical results indicate that the proposed framework maintains good performance in both multi-scenario and multi-step prediction.
{"title":"A combined framework for carbon emissions prediction integrating online search attention","authors":"Dabin Zhang, Zehui Yu, Liwen Ling, Huanling Hu, Ruibin Lin","doi":"10.3233/jifs-236451","DOIUrl":"https://doi.org/10.3233/jifs-236451","url":null,"abstract":"As CO2 emissions continue to rise, the problem of global warming is becoming increasingly serious. It is important to provide a robust management decision-making basis for the reductions of carbon emissions worldwide by predicting carbon emissions accurately. However, affected by various factors, the prediction of carbon emissions is challenging due to its nonlinear and nonstationary characteristics. Thus, we propose a combination forecast model, named CEEMDAN-GWO-SVR, which incorporates multiple features to predict trends in China’s carbon emissions. First, the impact of online search attention and public health emergencies are considered in carbon emissions prediction. Since the impact of different variables on carbon emissions is lagged, the grey relational degree is used to identify the appropriate lag series. Second, irrelevant features are eliminated through RFECV. To address the issue of feature redundancy of online search attention, we propose a dimensionality reduction method based on keyword classification. Finally, to evaluate the features of the proposed framework, four evaluation indicators are tested in multiple machine learning models. The best-performed model (SVR) is optimized by CEEMDAN and GWO to enhance prediction accuracy. The empirical results indicate that the proposed framework maintains good performance in both multi-scenario and multi-step prediction.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"8 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140244693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new bootstrap-aggregating (bagging) ensemble learning algorithm based on classification certainty and semantic correlation is proposed to improve the classification accuracy of ensemble learning. First, two predetermined thresholds are introduced to construct the long-text and short-text sample subsets, and different deep learning methods are compared to construct the optimal base classifier group for each sample subset. Then, the random sampling method employed in traditional bagging classification algorithms is improved, and a threshold-group-based random sampling method is proposed to obtain the long and short training sample subsets of each iteration. Finally, the sample classification certainty of the base classifiers for different categories is defined, and semantic correlation information is integrated with the traditional weighted voting classifier ensemble method to avoid the loss of important information during the sampling process. Experimental results on multiple datasets demonstrate that the algorithm significantly improves text classification accuracy and outperforms typical deep learning algorithms. On the CNews dataset, the proposed algorithm achieves F1 improvements of approximately 0.082, 0.061 and 0.019 over traditional ensemble learning algorithms such as random forest, M_ADA_A_SMV and CNN_SVM_LR. Moreover, it achieves the best F1 values of 0.995, 0.985, and 0.989 on the Spam, CNews, and SogouCS datasets, respectively, when compared with ensemble learning algorithms using different base classifiers.
{"title":"New ensemble learning algorithm based on classification certainty and semantic correlation","authors":"You-wei Wang, Lizhou Feng","doi":"10.3233/jifs-236422","DOIUrl":"https://doi.org/10.3233/jifs-236422","url":null,"abstract":"A new bootstrap-aggregating (bagging) ensemble learning algorithm is proposed based on classification certainty and semantic correlation to improve the classification accuracy of ensemble learning. First, two predetermined thresholds are introduced to construct the long and short-text sample subsets, and different deep learning methods are compared to construct the optimal base classifier groups for each sample subsets. Then, the random sampling method employed in traditional bagging classification algorithms is improved, and a threshold group based random sampling method is proposed to obtain long and short training sample subsets of each iteration. Finally, the sample classification certainty of the base classifiers for different categories is defined, and the semantic correlation information is integrated with the traditional weighted voting classifier ensemble method to avoid the loss of important information during the sampling process. The experimental results on multiple datasets demonstrate that the algorithm significantly improves text classification accuracy and outperforms typical deep learning algorithms. The proposed algorithm achieves the improvements of approximately 0.082, 0.061 and 0.019 on CNews dataset when the F1 measurement is used over the traditional ensemble learning algorithms such as random forest, M_ADA_A_SMV and CNN_SVM_LR. Moreover, it achieves the best F1 values of 0.995, 0.985, and 0.989 on the datasets of Spam, CNews, and SogouCS datasets, respectively, when compared with the ensemble learning algorithms using different base classifiers.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":"11 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140242189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}