
Latest Publications in Applied Intelligence

Skeleton-based human action recognition using LSTM and depthwise separable convolutional neural network
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-11. DOI: 10.1007/s10489-024-06082-w
Hoangcong Le, Cheng-Kai Lu, Chen-Chien Hsu, Shao-Kang Huang

In the field of computer vision, the task of human action recognition (HAR) represents a challenge, due to the complexity of capturing nuanced human movements from video data. To address this issue, researchers have developed various algorithms. In this study, a novel two-stream architecture is developed that combines LSTM with a depthwise separable convolutional neural network (DSConV) and skeleton information, with the aim of enhancing the accuracy of HAR. The 3D coordinates of each joint in the skeleton are extracted using the Mediapipe library, and the 2D coordinates are obtained using MoveNet. The proposed method comprises two streams, called the temporal LSTM module and the joint-motion module, and was developed to overcome the limitations of prior two-stream RNN models, such as the vanishing gradient problem and the difficulty of effectively extracting temporal-spatial information. A performance evaluation on the benchmark datasets of JHMDB (73.31%), Florence-3D Action (97.67%), SBU Interaction (95.2%), and Penn Action (94.0%) showcases the effectiveness of the proposed model. A comparison with state-of-the-art methods demonstrates the superior performance of the approach on these datasets. This study contributes to advancing the field of HAR, with potential applications in surveillance and robotics.
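
To make the two-stream idea concrete, below is a minimal PyTorch sketch of how an LSTM temporal stream and a depthwise separable 1D-convolution stream over skeleton coordinates could be combined; the layer sizes, the frame-difference motion features, and the concatenation-based fusion are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                      # x: (batch, channels, frames)
        return torch.relu(self.pointwise(self.depthwise(x)))

class TwoStreamHAR(nn.Module):
    """Illustrative two-stream classifier: temporal LSTM stream + joint-motion conv stream."""
    def __init__(self, num_joints=17, coord_dim=3, hidden=128, num_classes=21):
        super().__init__()
        feat = num_joints * coord_dim
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)        # temporal stream
        self.motion_conv = DepthwiseSeparableConv1d(feat, hidden)  # joint-motion stream
        self.classifier = nn.Linear(hidden * 2, num_classes)

    def forward(self, skel):                   # skel: (batch, frames, joints * coords)
        _, (h, _) = self.lstm(skel)                                # last hidden state of the LSTM
        motion = skel[:, 1:] - skel[:, :-1]                        # frame-to-frame joint motion
        m = self.motion_conv(motion.transpose(1, 2)).mean(dim=2)   # pool over time
        return self.classifier(torch.cat([h[-1], m], dim=1))       # late fusion by concatenation

logits = TwoStreamHAR()(torch.randn(4, 32, 17 * 3))    # 4 clips, 32 frames, 17 3D joints
print(logits.shape)                                    # torch.Size([4, 21])
```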

Graphical abstract

Citations: 0
A chaotic variant of the Golden Jackal Optimizer and its application for medical image segmentation
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-11. DOI: 10.1007/s10489-024-06084-8
Amir Hamza, Morad Grimes, Abdelkarim Boukabou, Badis Lekouaghet, Diego Oliva, Samira Dib, Yacine Himeur

The initial segmentation phase is crucial in image processing to simplify the image representation and extract some desired features. Different methods and techniques have been proposed for image multi-level thresholding, but they are still stuck in local optima and need improvement. Recently, a metaheuristic optimization algorithm called Golden Jackal Optimizer (GJO) has been proposed as an alternative solution. The GJO has been adopted as a good solution for many optimization problems. However, the GJO tends to converge to a local minimum during execution, often leading to unsatisfactory results. Most variants of GJO are based on chaotic systems due to their easy implementation and remarkable capacity to avoid being trapped in local optima. This paper proposes a Polynomial Chebychev Symmetric Chaotic-based GJO (PCSCGJO) algorithm by combining a recently developed chaotic generating function to achieve better segmentation results. This variant improves the GJO by introducing the chaotic generating function of the Chebyshev polynomials as an update process while searching for the optimal solution. Simulation results prove the effectiveness of the PCSCGJO method and its ability to deal with different medical color images. The quality of the segmented images obtained by the proposed method was compared to that of well-known metaheuristic algorithms using performance metrics such as PSNR, SSIM, FSIM, and MSE. Consequently, the metric values show that the suggested technique outperforms the other methods regarding quality and accuracy.
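
As a rough illustration of the chaotic ingredient, the sketch below generates a Chebyshev chaotic sequence and uses it to perturb candidate threshold vectors; the specific update formula and step size are assumptions for illustration, not the PCSCGJO update rule.

```python
import numpy as np

def chebyshev_chaotic_sequence(x0=0.7, order=4, length=100):
    """Chebyshev chaotic map x_{k+1} = cos(order * arccos(x_k)); values stay in [-1, 1]."""
    seq, x = np.empty(length), x0
    for k in range(length):
        x = np.cos(order * np.arccos(x))
        seq[k] = x
    return seq

def chaotic_position_update(position, best, chaos_value, step=0.5):
    """Illustrative update: move toward the current best solution, scaled by a chaotic factor."""
    return position + step * chaos_value * (best - position)

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 255, size=(5, 3))   # 5 candidate vectors of 3 grey-level thresholds
best = candidates[0].copy()
chaos = chebyshev_chaotic_sequence(length=5)
updated = np.array([chaotic_position_update(c, best, z) for c, z in zip(candidates, chaos)])
print(np.clip(updated, 0, 255))                 # perturbed candidates, kept in the valid range
```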

Citations: 0
Multi-level adaptive feature representation based on task augmentation for Cross-Domain Few-Shot learning
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-10. DOI: 10.1007/s10489-024-06110-9
Ling Yue, Lin Feng, Qiuping Shuai, Zihao Li, Lingxiao Xu

Cross-Domain Few-Shot Learning (CDFSL) is one of the most cutting-edge fields in machine learning. It not only addresses the traditional few-shot problem but also allows for different distributions between base classes and novel classes. However, most current CDFSL models only focus on the generalization performance of high-level features during training and testing, which hinders their ability to generalize well to domains with significant gaps. To overcome this problem, we propose a CDFSL method based on Task Augmentation and Multi-Level Adaptive feature representation (TA-MLA). At the feature representation level, we introduce a meta-learning strategy for multi-level features and adaptive features. The former come from different layers of the network and jointly participate in image prediction to fully explore transferable features suitable for cross-domain scenarios. The latter are based on a feature adaptation module of feed-forward attention, aiming to learn domain-adaptive features to improve the generalization of the model. At the training task level, we employ a plug-and-play Task Augmentation (TA) module to generate challenging tasks with adaptive inductive biases, thereby expanding the distribution of the source domain and further bridging domain gaps. Extensive experiments were conducted on multiple datasets. The results demonstrate that our meta-learning-based method can effectively improve few-shot classification performance, especially in cases with significant domain shift.
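
A minimal PyTorch sketch of what a feed-forward-attention feature adaptation module might look like is given below; the layer dimensions, the sigmoid channel re-weighting, and the simple averaging of two feature levels are assumptions, not the TA-MLA design.

```python
import torch
import torch.nn as nn

class FeedForwardAttentionAdapter(nn.Module):
    """Illustrative feature-adaptation module: a small feed-forward net scores each channel
    and re-weights backbone features, letting the model emphasise domain-relevant channels."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, feats):                    # feats: (batch, dim)
        return feats * torch.sigmoid(self.score(feats))

# Illustrative multi-level use: adapt features from two backbone depths and average them.
feats_mid, feats_high = torch.randn(8, 512), torch.randn(8, 512)
adapter = FeedForwardAttentionAdapter()
fused = 0.5 * adapter(feats_mid) + 0.5 * adapter(feats_high)
print(fused.shape)                               # torch.Size([8, 512])
```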

Citations: 0
Effectiveness of encoder-decoder deep learning approach for colorectal polyp segmentation in colonoscopy images
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-10. DOI: 10.1007/s10489-024-06167-6
Ameer Hamza, Muhammad Bilal, Muhammad Ramzan, Nadia Malik

Colorectal cancer is considered one of the deadliest diseases, contributing to an alarming increase in annual deaths worldwide, with colorectal polyps recognized as precursors to this malignancy. Early and accurate detection of these polyps is crucial for reducing the mortality rate of colorectal cancer. However, the manual detection of polyps is a time-consuming process and requires the expertise of trained medical professionals. Moreover, it often misses polyps due to their varied size, color, and texture. Computer-aided diagnosis systems offer potential improvements, but they often struggle with precision in complex visual environments. This study presents an enhanced deep learning approach using encoder-decoder architecture for colorectal polyp segmentation to capture and utilize complex feature representations. Our approach introduces an enhanced dual attention mechanism, combining spatial and channel-wise attention to focus precisely on critical features. Channel-wise attention, implemented via an optimized Squeeze-and-Excitation (S&E) block, allows the network to capture comprehensive contextual information and interrelationships among different channels, ensuring a more refined feature selection process. The experimental results showed that the proposed model achieved a mean Intersection over Union (IoU) of 0.9054 and 0.9277, a dice coefficient of 0.9006 and 0.9128, a precision of 0.8985 and 0.9517, a recall of 0.9190 and 0.9094, and an accuracy of 0.9806 and 0.9907 on the Kvasir-SEG and CVC-ClinicDB datasets, respectively. Moreover, the proposed model outperforms the existing state-of-the-art resulting in improved patient outcomes with the potential to enhance the early detection of colorectal polyps.
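
The dual attention idea can be sketched in PyTorch as a Squeeze-and-Excitation block for channel attention followed by a spatial attention map; the kernel size, reduction ratio, and serial ordering are illustrative assumptions rather than the paper's exact block.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze spatial dims, excite channels with a bottleneck MLP."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                                   # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                     # global average pool -> (B, C)
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Spatial attention: a 7x7 conv over channel-pooled maps highlights important regions."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class DualAttention(nn.Module):
    """Channel attention followed by spatial attention, e.g. as a decoder-side refinement."""
    def __init__(self, channels):
        super().__init__()
        self.se, self.sa = SqueezeExcitation(channels), SpatialAttention()

    def forward(self, x):
        return self.sa(self.se(x))

out = DualAttention(64)(torch.randn(2, 64, 32, 32))
print(out.shape)                                            # torch.Size([2, 64, 32, 32])
```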

Citations: 0
Three-way conflict analysis based on multi-scale situation tables
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-10. DOI: 10.1007/s10489-024-06188-1
Chuan-Yuan Lu, Hai-Long Yang, Zhi-Lian Guo

In the existing three-way conflict analysis models, ratings have only one scale. However, when evaluating an issue in real life, agents can also use multi-scale ratings, which can provide a more comprehensive description and better analysis. Therefore, it is necessary to study three-way conflict analysis based on a multi-scale situation table. In this paper, we consider the construction of three-way conflict analysis models on multi-scale situation tables (MS-STs). Firstly, we introduce the concept of MS-STs, in which the attitudes of agents towards issues are represented by multi-scale ratings. Secondly, we construct two types of three-way conflict analysis models on MS-STs using two different methods. One approach is to directly construct a three-way conflict analysis model on original MS-STs, called Type-1 three-way conflict analysis model. In this approach, we measure the conflict distances between agents on a subset of issues by using the proposed weighted distance function. We then trisect all pairs of agents. The other method involves converting an original MS-ST into a single-scale situation table through optimal scale selection. This results in a single-scale situation table induced by the optimal scale combination. Based on this, we construct a corresponding Type-2 three-way conflict analysis model. We provide several examples to illustrate the construction process of these two models. Additionally, we provide the calculation methods for weights and thresholds. Finally, we compare the proposed models in this paper with existing models to verify their applicability and effectiveness.
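
A small sketch of the two ingredients the abstract names, a weighted distance between agents' ratings and a threshold-based trisection of agent pairs, is shown below; the absolute-difference form of the distance and the alpha/beta thresholds are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def weighted_conflict_distance(ratings_a, ratings_b, weights):
    """Illustrative weighted distance between two agents' ratings over a set of issues."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * np.abs(np.asarray(ratings_a) - np.asarray(ratings_b))))

def trisect(distance, alpha=0.6, beta=0.3):
    """Three-way split of an agent pair by two thresholds: conflict / neutral / alliance."""
    if distance >= alpha:
        return "conflict"
    if distance <= beta:
        return "alliance"
    return "neutral"

# Ratings on three issues, rescaled to [0, 1]; weights reflect issue importance.
agent_u, agent_v = [1.0, 0.5, 0.0], [0.0, 0.5, 1.0]
d = weighted_conflict_distance(agent_u, agent_v, weights=[0.5, 0.3, 0.2])
print(d, trisect(d))    # 0.7 conflict
```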

Citations: 0
An efficient PSO-based evolutionary model for closed high-utility itemset mining
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-10. DOI: 10.1007/s10489-024-06151-0
Simen Carstensen, Jerry Chun-Wei Lin

High-utility itemset mining (HUIM) is a widely adopted data mining technique for discovering valuable patterns in transactional databases. Although HUIM can provide useful knowledge in various types of data, it can be challenging to interpret the results when many patterns are found. To alleviate this, closed high-utility itemset mining (CHUIM) has been suggested, which provides users with a more concise and meaningful set of solutions. However, CHUIM is a computationally demanding task, and current approaches can require prolonged runtimes. This paper aims to solve this problem and proposes a meta-heuristic model based on particle swarm optimization (PSO) to discover CHUIs, called CHUI-PSO. Moreover, the algorithm incorporates several new strategies to reduce the computational cost associated with similar existing techniques. First, we introduce Extended TWU pruning (ETP), which aims to decrease the number of possible candidates to improve the discovery of solutions in large search spaces. Second, we propose two new utility upper bounds, used to estimate itemset utilities and bypass expensive candidate evaluations. Finally, to increase population diversity and prevent redundant computations, we suggest a structure called ExploredSet to maintain and utilize the evaluated candidates. Extensive experimental results show that CHUI-PSO outperforms the current state-of-the-art algorithms regarding execution time, accuracy, and convergence.
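
The TWU-based pruning idea that ETP extends can be sketched as follows; the toy transaction database, unit profits, and minimum-utility threshold are invented for illustration, and the sketch shows plain TWU pruning rather than the proposed Extended TWU pruning.

```python
from collections import defaultdict

# Each transaction is a list of (item, quantity); the profit table gives unit utility per item.
transactions = [
    [("a", 2), ("b", 1), ("d", 3)],
    [("a", 1), ("c", 4)],
    [("b", 2), ("c", 1), ("d", 1)],
]
profit = {"a": 5, "b": 2, "c": 1, "d": 3}

def transaction_utility(tx):
    return sum(profit[item] * qty for item, qty in tx)

def twu_prune(transactions, min_util):
    """An item's TWU is the summed utility of the transactions containing it; items whose
    TWU falls below min_util cannot appear in any high-utility itemset and are pruned."""
    twu = defaultdict(int)
    for tx in transactions:
        tu = transaction_utility(tx)
        for item, _ in tx:
            twu[item] += tu
    return {item for item, value in twu.items() if value >= min_util}, dict(twu)

promising, twu = twu_prune(transactions, min_util=20)
print(twu)         # {'a': 30, 'b': 29, 'd': 29, 'c': 17}
print(promising)   # {'a', 'b', 'd'} survive; 'c' is pruned
```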

Citations: 0
A two-stage cyberbullying detection based on multi-view features and decision fusion strategy
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-10. DOI: 10.1007/s10489-024-06049-x
Tingting Li, Ziming Zeng, Shouqiang Sun

Cyberbullying has emerged as a pressing concern across various social platforms due to the escalating usage of online networks. Cyberbullying may lead victims to depression, self-harm, and even suicide. In this research, a two-stage cyberbullying detection framework based on multi-view features and decision fusion strategies is proposed. The first stage is to discover cyberbullying texts in social media, and the second stage delves into categorizing the specific forms of bullying present in the identified texts. In the two-stage detection process, features are constructed from multiple views, including Content view, Profanity view, and User view, to portray the bullying behavior. Furthermore, a decision fusion strategy is designed, incorporating both single-view features and multi-view features to enhance detection effectiveness. Finally, the research explains the complex mechanism of multi-view features in two-stage cyberbullying detection by calculating their SHAP values. The experimental results demonstrate the effectiveness of the multi-view feature and decision fusion strategy in cyberbullying detection. Notably, this framework yields impressive results, boasting an F1-score of 89.66% and an AUC of 95.98% in Stage I, while achieving an F1-score of 74.25% and an Accuracy of 79.01% in Stage II. The interpretability analysis of features affirms the pivotal role played by multi-view features, with the Content view features emerging as especially significant in the pursuit of effective cyberbullying detection.
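
A minimal sketch of soft decision fusion over per-view classifiers is shown below; the synthetic view features, logistic-regression base classifiers, and fusion weights are assumptions for illustration and do not reproduce the paper's feature construction or SHAP analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for the three views: content embeddings, profanity counts, user stats.
content_view = rng.normal(size=(n, 20))
profanity_view = rng.poisson(1.0, size=(n, 5)).astype(float)
user_view = rng.normal(size=(n, 8))
labels = rng.integers(0, 2, size=n)

views = [content_view, profanity_view, user_view]
classifiers = [LogisticRegression(max_iter=1000).fit(X, labels) for X in views]

def fuse_decisions(view_features, classifiers, weights=(0.5, 0.25, 0.25)):
    """Soft decision fusion: weighted average of the per-view class probabilities."""
    probs = [w * clf.predict_proba(X) for w, clf, X in zip(weights, classifiers, view_features)]
    return np.argmax(sum(probs), axis=1)

preds = fuse_decisions(views, classifiers)
print((preds == labels).mean())    # accuracy on the synthetic data, for illustration only
```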

Citations: 0
DDet3D: embracing 3D object detector with diffusion
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-09. DOI: 10.1007/s10489-024-06045-1
Gopi Krishna Erabati, Helder Araujo

Existing approaches rely on heuristic or learnable object proposals (which are required to be optimised during training) for 3D object detection. In our approach, we replace the hand-crafted or learnable object proposals with randomly generated object proposals by formulating a new paradigm to employ a diffusion model to detect 3D objects from a set of randomly generated and supervised learning-based object proposals in an autonomous driving application. We propose DDet3D, a diffusion-based 3D object detection framework that formulates 3D object detection as a generative task over the 3D bounding box coordinates in 3D space. To our knowledge, this work is the first to formulate 3D object detection with a denoising diffusion model and to establish that 3D randomly generated and supervised learning-based proposals (different from empirical anchors or learnt queries) are also potential object candidates for 3D object detection. During training, the 3D random noisy boxes are generated from the 3D ground truth boxes by progressively adding Gaussian noise, and the DDet3D network is trained to reverse the diffusion process. During the inference stage, the DDet3D network is able to iteratively refine the 3D randomly generated and supervised learning-based noisy boxes to predict 3D bounding boxes conditioned on the LiDAR Bird's Eye View (BEV) features. The advantage of DDet3D is that it decouples the training and inference stages, thus enabling the use of a larger number of proposal boxes or sampling steps during inference to improve accuracy. We conduct extensive experiments and analysis on the nuScenes and KITTI datasets. DDet3D achieves competitive performance compared to well-designed 3D object detectors. Our work serves as a strong baseline to explore and employ more efficient diffusion models for 3D perception tasks.
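
The forward (noising) half of this process can be sketched in a few lines of PyTorch: ground-truth box parameters are progressively corrupted with Gaussian noise under a cumulative noise schedule. The linear beta schedule, box normalisation, and 7-parameter box encoding are assumptions, and the learned reverse (refinement) network conditioned on BEV features is not shown.

```python
import torch

def make_noise_schedule(timesteps=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; alpha_bar_t is the cumulative product used by q_sample."""
    betas = torch.linspace(beta_start, beta_end, timesteps)
    return torch.cumprod(1.0 - betas, dim=0)

def q_sample(gt_boxes, t, alpha_bar):
    """Forward diffusion: corrupt ground-truth box parameters with Gaussian noise at step t.
    gt_boxes: (N, 7) tensor of (x, y, z, l, w, h, yaw), assumed normalised to a common scale."""
    noise = torch.randn_like(gt_boxes)
    noisy = alpha_bar[t].sqrt() * gt_boxes + (1.0 - alpha_bar[t]).sqrt() * noise
    return noisy, noise

alpha_bar = make_noise_schedule()
gt = torch.tensor([[0.10, -0.20, 0.00, 0.40, 0.20, 0.15, 0.30]])
lightly_noisy, _ = q_sample(gt, t=10, alpha_bar=alpha_bar)    # early step: box barely perturbed
almost_noise, _ = q_sample(gt, t=900, alpha_bar=alpha_bar)    # late step: close to pure noise
print(lightly_noisy, almost_noise)
```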

Citations: 0
A new multivariate decomposition-ensemble approach with denoised neighborhood rough set for stock price forecasting over time-series information system
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-09. DOI: 10.1007/s10489-024-06070-0
Juncheng Bai, Bingzhen Sun, Yuqi Guo, Xiaoli Chu

The uncertainty of the stock market is the foundation for investors to obtain returns. Driven by these interests, stock price forecasting has become a research hotspot. However, because stock price data are high-dimensional, highly volatile, and noisy, forecasting stock prices is a highly challenging task. Existing stock price forecasting methods only study low-dimensional data, which cannot reflect the cumulative effect of multiple factors on the stock price. To effectively address the high dimensionality, high volatility, and noise of stock prices, a time-series information system (TSIS) forecasting approach for stock prices is proposed. To dynamically depict real-world decision-making scenarios at a finer granularity, the TSIS is constructed on the basis of information systems. Then, a denoised neighborhood rough set (DNRS) model based on the TSIS is proposed, using a local density factor for feature selection, which weakens the impact of noise on the sample data. Subsequently, multivariate empirical mode decomposition (MEMD) and a multivariate kernel extreme learning machine (MKELM) are employed to decompose and forecast. Finally, the proposed TSIS forecasting approach is applied to stock prices. Experimental results show that the TSIS forecasting approach has excellent performance and can support quantitative trading in the stock market.
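
As background for the MKELM component, the sketch below shows the closed-form training of a plain kernel extreme learning machine on lag windows of a synthetic series; the RBF kernel, regularisation constant, and single-variable windowing are assumptions and do not reproduce the multivariate MEMD plus MKELM pipeline or the DNRS feature selection.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel extreme learning machine: output weights come from the closed form
    beta = (I/C + K)^{-1} y, so no iterative training is required."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
        return self

    def predict(self, X_new):
        return rbf_kernel(X_new, self.X, self.gamma) @ self.beta

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 300)
series = np.sin(t) + 0.05 * rng.normal(size=t.size)              # stand-in for one decomposed mode
X = np.stack([series[i:i + 5] for i in range(len(series) - 6)])  # 5-step lag windows
y = series[5:-1]                                                 # one-step-ahead targets
model = KernelELM(C=100.0, gamma=2.0).fit(X[:250], y[:250])
print(np.abs(model.predict(X[250:]) - y[250:]).mean())           # MAE on the held-out tail
```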

Citations: 0
Application of switching-input LSTM network for vessel trajectory prediction
IF 3.4, Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-01-09. DOI: 10.1007/s10489-024-06079-5
Weihong Wang, Zuo Yi, Licheng Zhao, Peng Jia, Haibo Kuang

Due to the rapid economic development of modern society, the demand for cargo in the shipping industry has experienced unprecedented growth in recent years. The introduction of a large number of ships, especially large, new, and intelligent ships, has made shipping networks more complex. Controlling transportation risks has become more challenging than ever before. Ship trajectory prediction based on automatic identification system (AIS) data can effectively help identify abnormal ship behaviors and reduce maritime risks such as collisions, grounding, and contacts. In recent years, with the rapid development of deep learning theories, recurrent neural network models (long short-term memory and gated recurrent unit) have been widely used in ship trajectory prediction due to their powerful ability to capture hidden information in time-series data. However, these models struggle with tasks involving highly complex trajectory features. To address this issue, this paper introduces a switching-input mechanism into the LSTM and constructs a ship trajectory prediction model, SI-LSTM. The switching-input mechanism enables the model to adjust its processing of important information according to dynamic changes in the input data, effectively capturing local features of complex trajectories. The experimental section, which includes eight cases of complex trajectories, demonstrates the competitive generalization ability and prediction accuracy of SI-LSTM.
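
One plausible reading of a switching-input mechanism is a learned per-step gate that mixes two input representations before they reach the LSTM, sketched below in PyTorch; the gate formulation, the (lat, lon, speed, course) feature layout, and the two-value prediction head are assumptions rather than the SI-LSTM design.

```python
import torch
import torch.nn as nn

class SwitchingInputLSTM(nn.Module):
    """Illustrative switching-input mechanism: a learned gate decides, per time step, how much
    of each input representation (raw kinematics vs. derived motion features) the LSTM sees."""
    def __init__(self, feat_dim=4, hidden=64, horizon=2):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feat_dim * 2, feat_dim), nn.Sigmoid())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)      # e.g. predicted latitude/longitude offsets

    def forward(self, x_a, x_b):                    # both: (batch, time, feat_dim)
        g = self.gate(torch.cat([x_a, x_b], dim=-1))
        mixed = g * x_a + (1.0 - g) * x_b           # per-step soft switch between the two inputs
        _, (h, _) = self.lstm(mixed)
        return self.head(h[-1])

# AIS-style inputs: (lat, lon, speed, course) and their first differences along time.
track = torch.randn(8, 30, 4)
deltas = torch.diff(track, dim=1, prepend=track[:, :1])
pred = SwitchingInputLSTM()(track, deltas)
print(pred.shape)                                   # torch.Size([8, 2])
```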

Citations: 0