The YOLOv5-based framework has achieved great success in object detection. However, in forest fire detection tasks, few high-quality forest fire images are available, and the performance of YOLO models declines severely when detecting small-scale forest fires. Making full use of context information can effectively improve small-target detection. To this end, this paper proposes a new graph-embedded YOLOv5 forest fire detection framework that improves small-scale forest fire detection by exploiting context information at different scales. To mine local context information, we design a spatial graph convolution operation based on the message passing neural network (MPNN) mechanism. To exploit global context information, we introduce a multi-head self-attention (MSA) module before each YOLO head. Experimental results on FLAME and our self-built fire dataset show that the proposed model improves the accuracy of small-scale forest fire detection, and by fully utilizing the advantages of the YOLOv5 framework it also retains real-time performance.
{"title":"An effective graph embedded YOLOv5 model for forest fire detection","authors":"Hui Yuan, Zhumao Lu, Ruizhe Zhang, Jinsong Li, Shuai Wang, Jingjing Fan","doi":"10.1111/coin.12640","DOIUrl":"https://doi.org/10.1111/coin.12640","url":null,"abstract":"<p>The existing YOLOv5-based framework has achieved great success in the field of target detection. However, in forest fire detection tasks, there are few high-quality forest fire images available, and the performance of the YOLO model has suffered a serious decline in detecting small-scale forest fires. Making full use of context information can effectively improve the performance of small target detection. To this end, this paper proposes a new graph-embedded YOLOv5 forest fire detection framework, which can improve the performance of small-scale forest fire detection using different scales of context information. To mine local context information, we design a spatial graph convolution operation based on the message passing neural network (MPNN) mechanism. To utilize global context information, we introduce a multi-head self-attention (MSA) module before each YOLO head. The experimental results on FLAME and our self-built fire dataset show that our proposed model improves the accuracy of small-scale forest fire detection. The proposed model achieves high performance in real-time performance by fully utilizing the advantages of the YOLOv5 framework.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 2","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140161493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, advances in deep learning have greatly facilitated remote sensing applications in agriculture, such as crop area and growth monitoring, crop classification, and agricultural disaster monitoring. The accuracy of image classification plays a crucial role in these applications. Although traditional deep learning methods have achieved significant success in remote sensing image classification, they typically rely on convolutional neural networks with a large number of parameters that must be optimized on numerous remote sensing images during training. To address these challenges, we propose a multiscale attention network (MAN) for few-shot remote sensing image classification. The method consists primarily of feature extractors and attention modules, which exploit features at different scales through multiscale feature training. We evaluate the proposed method on three agricultural remote sensing image datasets and observe superior performance compared with existing approaches. Furthermore, we validate its generalizability on an oil well indicator diagram classification task.
{"title":"Multiscale attention for few-shot image classification","authors":"Tong Zhou, Changyin Dong, Junshu Song, Zhiqiang Zhang, Zhen Wang, Bo Chang, Dechun Chen","doi":"10.1111/coin.12639","DOIUrl":"https://doi.org/10.1111/coin.12639","url":null,"abstract":"<p>In recent years, the application of traditional deep learning methods in the agricultural field using remote sensing techniques, such as crop area and growth monitoring, crop classification, and agricultural disaster monitoring, has been greatly facilitated by advancements in deep learning. The accuracy of image classification plays a crucial role in these applications. Although traditional deep learning methods have achieved significant success in remote sensing image classification, they often involve convolutional neural networks with a large number of parameters that require extensive optimization using numerous remote sensing images for training purposes. To address these challenges, we propose a novel approach called multiscale attention network (MAN) for sample-based remote sensing image classification. This method consists primarily of feature extractors and attention modules to effectively utilize different scale features through multiscale feature training during the training phase. We evaluate our proposed method on three datasets comprising agricultural remote sensing images and observe superior performance compared to existing approaches. Furthermore, we validate its generalizability by testing it on an oil well indicator diagram specifically designed for classification tasks.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 2","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140161492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power line inspection is an important means of eliminating hidden dangers in power lines. Achieving high accuracy in deep neural network (DNN)-based power line inspection remains a difficult research problem because of multi-view shape variations and small object sizes. In this paper, an automatic detection model for power line inspection based on a feature visual clustering network (FVCNet) is established. First, an unsupervised clustering method for power line inspection is proposed and applied to construct a detection model that can recognize multi-view-shape objects and enhance object features. Then, bilinear interpolation is used for feature enhancement, and the enhanced high-level and low-level semantics are fused to address the problems of small object sizes and single samples. FVCNet is applied to the MS-COCO 2017 dataset and a self-made power line inspection dataset, increasing the test accuracy to 61.2% and 82.0%, respectively. Compared with other models, the test accuracy improves significantly, especially for categories strongly affected by multi-view shape variations.
{"title":"FVCNet: Detection obstacle method based on feature visual clustering network in power line inspection","authors":"Qiu-Yu Wang, Xian-Long Lv, Shi-Kai Tang","doi":"10.1111/coin.12634","DOIUrl":"https://doi.org/10.1111/coin.12634","url":null,"abstract":"<p>Power line inspection is an important means to eliminate hidden dangers of power lines. It is a difficult research problem how to solve the low accuracy of power line inspection based on deep neural network (DNN) due to the problems of multi-view-shape, small-size object. In this paper, an automatic detection model based on Feature visual clustering network (FVCNet) for power line inspection is established. First, an unsupervised clustering method for power line inspection is proposed, and applied to construct a detection model which can recognize multi-view-shape objects and enhanced object features. Then, the bilinear interpolation method is used to Feature enhancement method, and the enhanced high-level semantics and low-level semantics are fused to solve the problems of small object size and single sample. In this paper, FVCNet is applied to the MS-COCO 2017 data set and self-made power line inspection data set, and the test accuracy is increased to 61.2% and 82.0%, respectively. Compared with other models, especially for the categories that are greatly affected by multi-view-shape, the test accuracy has been improved significantly.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 2","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140161351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual question answering (VQA) is a challenging task in computer vision. Recently, there has been growing interest in text-based VQA tasks, emphasizing the important role of textual information in better understanding images. Effectively utilizing the text within an image is crucial for success in this task. However, existing approaches often overlook contextual information and neglect the relationships between scene-text tokens and image objects: they simply feed the scene-text tokens mined from the image into the VQA model without considering these factors. In this paper, the proposed model first analyzes the image to extract text and identify scene objects. It then comprehends the question and mines the relationships among the question, the OCRed text, and the scene objects, ultimately generating an answer through relational reasoning that applies semantic and positional attention. Our decoder with an attention map loss enables the prediction of complex answers and handles dynamic vocabularies, reducing the decoding space; it outperforms a softmax-based cross-entropy loss in accuracy and efficiency by accommodating varying vocabulary sizes. We evaluated the model on the TextVQA dataset and achieved an accuracy of 53.91% on the validation set and 53.98% on the test set. Moreover, on the ST-VQA dataset, our model obtained ANLS scores of 0.699 on the validation set and 0.692 on the test set.
{"title":"Enhancing scene-text visual question answering with relational reasoning, attention and dynamic vocabulary integration","authors":"Mayank Agrawal, Anand Singh Jalal, Himanshu Sharma","doi":"10.1111/coin.12635","DOIUrl":"https://doi.org/10.1111/coin.12635","url":null,"abstract":"<p>Visual question answering (VQA) is a challenging task in computer vision. Recently, there has been a growing interest in text-based VQA tasks, emphasizing the important role of textual information for better understanding of images. Effectively utilizing text information within the image is crucial for achieving success in this task. However, existing approaches often overlook the contextual information and neglect to utilize the relationships between scene-text tokens and image objects. They simply incorporate the scene-text tokens mined from the image into the VQA model without considering these important factors. In this paper, the proposed model initially analyzes the image to extract text and identify scene objects. It then comprehends the question and mines relationships among the question, OCRed text, and scene objects, ultimately generating an answer through relational reasoning by conducting semantic and positional attention. Our decoder with attention map loss enables prediction of complex answers and handles dynamic vocabularies, reducing decoding space. It outperforms softmax-based cross entropy loss in accuracy and efficiency by accommodating varying vocabulary sizes. We evaluated our model's performance on the TextVQA dataset and achieved an accuracy of 53.91% on the validation set and 53.98% on the test set. Moreover, on the ST-VQA dataset, our model obtained ANLS scores of 0.699 on the validation set and 0.692 on the test set.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139915719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, graph neural networks (GNNs) have attracted much attention in the field of machine learning due to their remarkable success in learning from graph-structured data. However, implementing GNNs in practice faces a critical bottleneck in the high complexity of communication and computation, which arises from the frequent exchange of graph data during model training, especially in communication-limited scenarios. To address this issue, we propose a federated graph neural network framework in which multiple mobile users collaboratively train a global GNN model in a federated way. Incorporating federated learning into the training of graph neural networks helps reduce the communication overhead of the system and protects the data privacy of local users; in addition, federated training reduces the computational complexity of the system significantly. We further introduce a greedy-based user selection scheme for federated graph neural networks, in which the wireless bandwidth is dynamically allocated among users to encourage more users to participate in federated training. We provide a convergence analysis of the federated training to gain insight into the impact of critical parameters on the system design. Finally, simulations on the Coriolis Ocean for ReAnalysis (CORA) dataset demonstrate the advantages of the proposed method.
{"title":"Greedy-based user selection for federated graph neural networks with limited communication resources","authors":"Hancong Huangfu, Zizhen Zhang","doi":"10.1111/coin.12637","DOIUrl":"https://doi.org/10.1111/coin.12637","url":null,"abstract":"<p>Recently, graph neural networks (GNNs) have attracted much attention in the field of machine learning due to their remarkable success in learning from graph-structured data. However, implementing GNNs in practice faces a critical bottleneck from the high complexity of communication and computation, which arises from the frequent exchange of graphic data during model training, especially in limited communication scenarios. To address this issue, we propose a novel framework of federated graph neural networks, where multiple mobile users collaboratively train the global model of graph neural networks in a federated way. The utilization of federated learning into the training of graph neural networks can help reduce the communication overhead of the system and protect the data privacy of local users. In addition, the federated training can help reduce the system computational complexity significantly. We further introduce a greedy-based user selection for the federated graph neural networks, where the wireless bandwidth is dynamically allocated among users to encourage more users to attend the federated training of neural networks. We perform the convergence analysis on the federated training of neural networks, in order to obtain some more insights on the impact of critical parameters on the system design. Finally, we perform the simulations on the coriolis ocean for reAnalysis (CORA) dataset and show the advantages of the proposed method in this paper.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139915720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a hybrid blockchain-based authentication scheme is proposed to authenticate randomly distributed sensor IoT nodes. These nodes are divided into three types: ordinary nodes, cluster heads and sink nodes. For authentication of these nodes in Wireless Sensor IoTs (WSIoTs), a hybrid blockchain model is introduced. It consists of both private and public blockchains, which are used to authenticate ordinary nodes and cluster heads, respectively. Moreover, to handle cluster head failure caused by inefficient energy consumption, the Improved Heterogeneous Gateway-based Energy-Aware Multi-hop Routing (I-HMGEAR) protocol is proposed in combination with blockchain; it provides a mechanism to use the overall energy of the network efficiently. Besides, storing the processed data of subnetworks on the blockchain raises the issue of increased monetary cost. To solve this issue, an external platform known as the InterPlanetary File System (IPFS) is used, which stores the data distributively on different devices. Simulation results show that the proposed model outperforms existing clustering schemes in terms of network lifetime and data storage cost of the WSIoTs. Compared with existing trust management, intrusion prevention and multi-WSN authentication schemes, the proposed scheme increases the lifetime of the network by 17.5%, 24.2% and 19.6%, respectively.
{"title":"Energy optimization with authentication and cost effective storage in the wireless sensor IoTs using blockchain","authors":"Turki Ali Alghamdi, Nadeem Javaid","doi":"10.1111/coin.12630","DOIUrl":"https://doi.org/10.1111/coin.12630","url":null,"abstract":"<p>In this paper, a hybrid blockchain-based authentication scheme is proposed that provides the mechanism to authenticate the randomly distributed sensor IoTs. These nodes are divided into three types: ordinary nodes, cluster heads and sink nodes. For authentication of these nodes in a Wireless Sensor IoTs (WSIoTs), a hybrid blockchain model is introduced. It consists of both private and public blockchains, which are used to authenticate ordinary nodes and cluster heads, respectively. Moreover, to handle the issue of cluster head failure due to inefficient energy consumption, Improved Heterogeneous Gateway-based Energy-Aware Multi-hop Routing (I-HMGEAR) protocol is proposed in combination with blockchain. It provides a mechanism to efficiently use the overall energy of the network. Besides, the processed data of subnetworks is stored on blockchain that causes the issue of increased monetary cost. To solve this issue, an external platform known as InterPlanetary File System (IPFS) is used, which distributively stores the data on different devices. The simulation results show that our proposed model outperforms existing clustering scheme in terms of network lifetime and data storage cost of the WSIoTs. Our proposed scheme increases the lifetime of the network as compared to existing trust management model, intrusion prevention and multi WSN authentication schemes by 17.5%, 24.2% and 19.6%, respectively.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139732347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although the Internet of Things (IoT) has been considered one of the most promising technologies for automating various daily life activities, such as monitoring and prediction, it has become especially useful for problem solving with the introduction and integration of artificial intelligence (AI)-enabled smart learning methodologies. Owing to these characteristics, AI-enabled IoT has been used in different application environments, such as agriculture, where the detection, prevention (if possible), and prediction of crop diseases, especially at the earliest possible stage, are urgently required. Bacterial stalk root is a common disease of maize that severely affects production and yield if necessary measures are not taken. In this article, an AI- and IoT-enabled decision support system (DSS) is developed to predict the possible occurrence of bacterial stalk root disease through a sophisticated technological infrastructure. For this purpose, Arduino agricultural boards with the necessary embedded sensors are deployed in maize fields to capture valuable data at certain time intervals and send it to a centralized module, where an AI-based DSS trained on a similar data set thoroughly examines the captured values for possible occurrence of the disease. Additionally, the proposed AI- and IoT-enabled DSS has been tested on freely available benchmark data sets as well as real-time captured data sets. Both experimental and simulation results show that the proposed scheme achieves the highest accuracy in timely prediction of the disease. Finally, maize crop plots equipped with the proposed system have significantly increased crop yield.
{"title":"Artificial intelligence and Internet of Things-enabled decision support system for the prediction of bacterial stalk root disease in maize crop","authors":"Shaha Al-Otaibi, Rahim Khan, Jehad Ali, Aftab Ahmed","doi":"10.1111/coin.12632","DOIUrl":"https://doi.org/10.1111/coin.12632","url":null,"abstract":"<p>Although the Internet of Things (IoT) has been considered one of the most promising technologies to automate various daily life activities, that is, monitoring and prediction, it has become extremely useful for problem solving with the introduction and integration of artificial intelligence (AI)-enabled smart learning methodologies. Therefore, due to their overwhelming characteristics, AI-enabled IoTs have been used in different application environments, such as agriculture, where detection, prevention (if possible), and prediction of crop diseases, especially at the earliest possible stage, are desperately required. Bacterial stalk root is a common disease of tomatoes that severely affects its production and yield if necessary measures are not taken. In this article, AI and an IoT-enabled decision support system (DSS) have been developed to predict the possible occurrence of bacterial stalk root diseases through a sophisticated technological infrastructure. For this purpose, Arduino agricultural boards, preferably with necessary embedded sensors, are deployed in the agricultural field of maize crops to capture valuable data at a certain time interval and send it to a centralized module where AI-based DSS, which is trained on an equally similar data set, is implemented to thoroughly examine captured data values for the possible occurrence of the disease. Additionally, the proposed AI- and IoT-enabled DSS has been tested on benchmark data sets, that is, freely available online, along with real-time captured data sets. Both experimental and simulation results show that the proposed scheme has achieved the highest accuracy level in timely prediction of the underlined disease. Finally, maize crop plots with the proposed system have significantly increased the yield (production) ratio of crops.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139732247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sangsawang O, Chanta S. Capacitated single-allocation hub location model for a flood relief distribution network. Computational Intelligence. 2020;36:1320–1347.
The errors are in Section 3.2, Model formulation, Equations (1), (2), (4), and (7). These errors are critical, especially in the objective function (1), where an index was mixed with the decision variables, making the whole Equation (1) wrong.
The wrong Equation (1): k is an index and should not appear between the summations, the upper limit k should be the capital K, and the X in front of O should be the Greek letter χ.

$$\operatorname{Minimize}\ \sum_{i=1}^{I} k \sum_{k=1}^{k} C_{ik}^{(m,t)} X_{ik}\left(XO_i+\delta D_i\right)+\sum_{i=1}^{I}\sum_{k=1}^{K}\sum_{l=1}^{L}\alpha C_{kl}^{(m,t)} Y_{kl}^{i}+\sum_{k=1}^{K} F_k X_{kk} \tag{1}$$

Should be:

$$\operatorname{Minimize}\ \sum_{i=1}^{I}\sum_{k=1}^{K} C_{ik}^{(m,t)} X_{ik}\left(\chi O_i+\delta D_i\right)+\sum_{i=1}^{I}\sum_{k=1}^{K}\sum_{l=1}^{L}\alpha C_{kl}^{(m,t)} Y_{kl}^{i}+\sum_{k=1}^{K} F_k X_{kk} \tag{1}$$

The wrong Equation (2):

$$\sum_{k=1}^{k} X_{ik}=1 \quad \forall i \tag{2}$$

Should be:

$$\sum_{k=1}^{K} X_{ik}=1 \quad \forall i \tag{2}$$

The wrong Equation (4): the upper limit j in the last summation should be the capital J, and the subscript kl in the second term should be lk.

$$\sum_{l=1}^{L} Y_{kl}^{i}-\sum_{l=1}^{L} Y_{kl}^{i}=O_i X_{ik}-\sum_{j=1}^{j} W_{ij} X_{jk} \quad \forall i,k \tag{4}$$

Should be:

$$\sum_{l=1}^{L} Y_{kl}^{i}-\sum_{l=1}^{L} Y_{lk}^{i}=O_i X_{ik}-\sum_{j=1}^{J} W_{ij} X_{jk} \quad \forall i,k \tag{4}$$

The wrong Equation (7): the summation index J = 1 should be the lowercase j = 1, and the subscript ik should be jk.

$$\sum_{J=1}^{J} t_{ik}^{(m,t)} X_{jk} \le T_d \quad \forall k \tag{7}$$

Should be:

$$\sum_{j=1}^{J} t_{jk}^{(m,t)} X_{jk} \le T_d \quad \forall k \tag{7}$$

The online version of this article has been corrected accordingly.

We apologize for this error.
{"title":"Correction to Capacitated single-allocation hub location model for a flood relief distribution network","authors":"","doi":"10.1111/coin.12614","DOIUrl":"10.1111/coin.12614","url":null,"abstract":"<p>Sangsawang O, Chanta S. Capacitated single-allocation hub location model for a flood relief distribution network. <i>Computational Intelligence</i>. 2020;36:1320–1347.</p><p>The errors are in Section 3.2 Model formulation, Equations (1), (2), (4), and (7). These errors are critical, especially in the objective model (1). It appeared that the index was mixed with the decision variables, so it made the whole Equation (1) wrong.</p><p>The online version of this article has been corrected accordingly.</p><p>We apologize for this error.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 2","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/coin.12614","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139553148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agriculture serves as the predominant driver of a country's economy, employing the largest share of the nation's workforce. Most farmers face the problem of choosing the most appropriate crop, one that yields well under the prevailing environmental conditions and is profitable for them; a poor choice leads to a notable decline in their overall productivity. Precision agriculture effectively resolves the issues encountered by farmers: it takes into account local climate, soil type, and past crop yields to determine which varieties will provide the best results. An explainable artificial intelligence (XAI) technique is used together with a radial basis function neural network and spider monkey optimization to classify suitable crops based on the underlying soil and environmental conditions. The XAI component provides better transparency of the prediction model when deciding the most suitable crops for a farm, taking into account a variety of geographical and operational criteria. The proposed model is assessed using standard metrics such as precision, recall, accuracy, and F1-score. Compared with the other cutting-edge approaches discussed in this study, the model shows fair performance, with approximately 12% better accuracy than the other models considered; similarly, precision improves by 10%, recall by 11%, and F1-score by 10%.
{"title":"XAI-driven model for crop recommender system for use in precision agriculture","authors":"Parvathaneni Naga Srinivasu, Muhammad Fazal Ijaz, Marcin Woźniak","doi":"10.1111/coin.12629","DOIUrl":"10.1111/coin.12629","url":null,"abstract":"<p>Agriculture serves as the predominant driver of a country's economy, constituting the largest share of the nation's manpower. Most farmers are facing a problem in choosing the most appropriate crop that can yield better based on the environmental conditions and make profits for them. As a consequence of this, there will be a notable decline in their overall productivity. Precision agriculture has effectively resolved the issues encountered by farmers. Today's farmers may benefit from what's known as precision agriculture. This method takes into account local climate, soil type, and past crop yields to determine which varieties will provide the best results. The explainable artificial intelligence (XAI) technique is used with radial basis functions neural network and spider monkey optimization to classify suitable crops based on the underlying soil and environmental conditions. The XAI technology would provide assets in better transparency of the prediction model on deciding the most suitable crops for their farms, taking into account a variety of geographical and operational criteria. The proposed model is assessed using standard metrics like precision, recall, accuracy, and F1-score. In contrast to other cutting-edge approaches discussed in this study, the model has shown fair performance with approximately 12% better accuracy than the other models considered in the current study. Similarly, precision has improvised by 10%, recall by 11%, and F1-score by 10%.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139483901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, manual event assignment for the Chinese mayor's hotline still suffers from low efficiency. In this paper, we propose a computer-aided event assignment method based on hierarchical features and enhanced association. First, hierarchical features of hotline events are extracted to obtain event encoding vectors. Second, a fine-tuned RoBERTa2RoBERTa model is used to encode the “sanding” responsibility texts of Chinese local departments. Third, an association enhanced attention (AEA) mechanism is proposed to capture the correlation information of the “event-sanding” splicing vectors and obtain “event-sanding” matching results, which are fed into the classifier. Finally, the assignment department is obtained by a department selection module. Experimental results show that our method achieves better performance than several baseline methods on HEAD, a dataset we construct independently. Ablation experiments also demonstrate the validity of each key module in our method.
{"title":"Event assigning based on hierarchical features and enhanced association for Chinese mayor's hotline","authors":"Gang Chen, Xiaomin Cheng, Jianpeng Chen, Xiangrong She, JiaQi Qin, Jian Chen","doi":"10.1111/coin.12626","DOIUrl":"10.1111/coin.12626","url":null,"abstract":"<p>Nowadays, manual event assignment for Chinese mayor's hotline is still a problem of low efficiency. In this paper, we propose a computer-aided event assignment method based on hierarchical features and enhanced association. First, hierarchical features of hotline events are extracted to obtain event encoding vectors. Second, the fine-tuned RoBERTa2RoBERTa model is used to encode the “sanding” responsibility texts of Chinese local departments. Third, an association enhanced attention (AEA) mechanism is proposed to capture the correlation information of the “event-sanding” splicing vectors for the sake of obtaining matching results of “event-sanding,” and the matching results are input into the classifier. Finally, the assignment department for is obtained by a department selection module. Experimental results show that our method can achieve better performance compared with several baseline methods on HEAD (a dataset we construct independently). The ablation experiments also demonstrate the validity of each key module in our method.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 1","pages":""},"PeriodicalIF":2.8,"publicationDate":"2024-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139385806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}