
Latest Articles in Knowledge-Based Systems

Robust block tensor PCA with F-norm projection framework
IF 7.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-06 · DOI: 10.1016/j.knosys.2024.112712
Xiaomin Zhang, Xiaofeng Wang, Zhenzhong Liu, Jianen Chen
Tensor principal component analysis (TPCA), also known as Tucker decomposition, ensures that the extracted “core tensor” maximizes the variance of the sample projections. Nevertheless, this method is particularly susceptible to noise and outliers because it uses the squared F-norm as the distance metric. In addition, it places no constraints on the discrepancies between the original tensors and the projected tensors. To address these issues, a novel tensor-based trigonometric projection framework is proposed that uses the F-norm to measure projection distances. Tensor data are first processed with a blocking recombination technique prior to projection, enhancing the representation of the data at a local spatio-temporal level. We then present a block TPCA with the F-norm metric (BTPCA-F) and develop an iterative greedy algorithm for solving it. Subsequently, regarding the F-norm projection relation as a “Pythagorean Theorem”, we provide three different objective functions: the tangent, cosine and sine models. These three functions directly or indirectly achieve the two objectives of maximizing projection distances and minimizing reconstruction errors. The corresponding tangent, cosine and sine solution algorithms based on BTPCA-F (tan-BTPCA-F, cos-BTPCA-F and sin-BTPCA-F) are presented to optimize these objective functions. The convergence and rotation invariance of the algorithms are rigorously proved theoretically and discussed in detail. Lastly, extensive experimental results illustrate that the proposed methods significantly outperform existing TPCA and related 2DPCA algorithms.
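The F-norm “Pythagorean” relation between projection distance and reconstruction error can be sketched numerically. Below is a minimal matrix-case illustration; the basis choice (top singular vectors) and the exact form of the three objectives are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Sketch of the F-norm projection relation described above (matrix case for clarity).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))               # one data block

# Orthonormal projection basis U (here: top-2 left singular vectors).
U, _, _ = np.linalg.svd(X, full_matrices=False)
U = U[:, :2]

P = U @ U.T @ X                               # projection of X onto span(U)
d_proj = np.linalg.norm(P, 'fro')             # F-norm projection distance
d_err = np.linalg.norm(X - P, 'fro')          # F-norm reconstruction error

# Orthogonal projection => ||X||_F^2 = d_proj^2 + d_err^2 (the "Pythagorean Theorem").
assert np.isclose(np.linalg.norm(X, 'fro') ** 2, d_proj ** 2 + d_err ** 2)

# The trigonometric objectives combine the two goals (maximize d_proj, minimize d_err):
tangent = d_err / d_proj                      # tan model: minimize
cosine = d_proj / np.linalg.norm(X, 'fro')    # cos model: maximize
sine = d_err / np.linalg.norm(X, 'fro')       # sin model: minimize
assert np.isclose(cosine ** 2 + sine ** 2, 1.0)
```

The identity cos² + sin² = 1 is what lets the cosine and sine models pursue the two objectives "directly or indirectly": maximizing one is equivalent to minimizing the other.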
Knowledge-Based Systems, Volume 306, Article 112712.
Citations: 0
A novel automated labelling algorithm for deep learning-based built-up areas extraction using nighttime lighting data
IF 7.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-06 · DOI: 10.1016/j.knosys.2024.112702
Baoling Gui, Anshuman Bhardwaj, Lydia Sam
Remote sensing imagery combined with cutting-edge deep learning techniques can produce impressive results in built-up areas extraction (BUAE). However, reducing the manual label-set production effort while ensuring high accuracy remains the main research challenge. This study pioneers the exploitation of nighttime lighting data (NLD) for automatically generating deep learning label sets, assessing the feasibility, and identifying the limitations, of using varied intensity ranges of lighting data directly for this purpose. We provide a novel method for generating fine-grained labels through an optimisation technique that eliminates the need for human involvement. The approach employs deep learning segmentation algorithms and has been tested in eight cities across seven countries. The results indicate that segmentation performs well in most cities, with the combination of iso clustering and NLD allowing more precise extraction of urban building districts; overall accuracy exceeds 90% in most cities. Results based on manual and historical data as labels (∼0.7) are significantly lower than those based on NLD. At the same time, deep learning segmentation holds a more significant advantage over traditional machine learning classification algorithms (∼0.8). DeeplabV3 and U-Net exhibit different strengths in segmentation and extraction: DeeplabV3 is stronger at eliminating errors, while U-Net retains the capability to handle sparsely labelled information, making them mutually advantageous depending on the specific requirements of the task. The study thus proposes a strategy to automatically extract built-up areas with minimal human involvement.
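The core labelling idea, splitting nighttime light intensities into built-up versus background pixels, can be sketched with a simple two-cluster 1-D k-means as a stand-in for the iso-clustering step. The tile values below are synthetic, and this is a simplification of the paper's pipeline:

```python
import numpy as np

# Derive a binary built-up label mask from nighttime lighting intensities.
rng = np.random.default_rng(1)

# Synthetic NLD tile: dim rural background plus one bright "urban" patch.
nld = rng.uniform(0, 5, size=(32, 32))
nld[10:20, 10:20] += rng.uniform(40, 60, size=(10, 10))

# Two-cluster 1-D k-means on pixel intensity (Lloyd iterations).
c = np.array([nld.min(), nld.max()], dtype=float)     # initial centroids
for _ in range(20):
    assign = np.abs(nld[..., None] - c).argmin(-1)    # nearest centroid per pixel
    c = np.array([nld[assign == k].mean() for k in (0, 1)])

labels = (assign == 1).astype(np.uint8)               # 1 = built-up candidate
```

Each pixel ends up in the "dim" or "bright" cluster, and the bright cluster becomes the automatically generated built-up label, ready to supervise a segmentation network with no manual annotation.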
Knowledge-Based Systems, Volume 306, Article 112702.
Citations: 0
AutoQuo: An Adaptive plan optimizer with reinforcement learning for query plan selection
IF 7.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-06 · DOI: 10.1016/j.knosys.2024.112664
Xiaoqiao Xiong, Jiong Yu, Zhenzhen He
Efficient execution plan generation is crucial for optimizing database queries. When exploring large table spaces to identify optimal table join orders, traditional cost-based optimizers may struggle with complicated queries. Learning-based optimizers have therefore recently been proposed to leverage past experience and generate high-quality execution plans. However, these optimizers demonstrate limited generalization capabilities for workloads with diverse distributions.
In this study, an adaptive plan selector based on reinforcement learning is proposed to address these issues. However, three challenges remain: (1) How to generate optimal multi-table join orders? We adopt an exploration–exploitation strategy to traverse the vast search space composed of candidate tables, thereby evaluating the significance of each table. Long short-term memory (LSTM) networks are subsequently used to predict the performance of join orders and generate high-quality candidate plans. (2) How to automatically learn new features in novel datasets? We employ the Actor–Critic strategy, which involves jointly cross-training the policy and value networks. By adjusting the parameters based on real feedback obtained from the database, the new datasets are automatically learnt. (3) How to automatically select the best plan? We introduce a constraint-aware optimal plan selection model that captures the relationship between constraints and plans. This model guides the selection of the best plan under constraints of execution time, cardinality, cost, and mean-squared error (MSE). The experimental results on real datasets demonstrated the superiority of the proposed approach over state-of-the-art baselines. Compared with PostgreSQL, we observed a reduction of 29.73% in total latency and 28.36% in tail latency.
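The exploration–exploitation step over candidate join orders can be sketched as an epsilon-greedy loop over the permutation space. The cost function below is a made-up stand-in for the paper's learned LSTM performance predictor; only the selection logic is illustrated:

```python
import itertools
import random

# Epsilon-greedy traversal of the join-order search space (toy version).
tables = ("A", "B", "C")

def toy_cost(order):
    # Hypothetical cost model: pretend joining A with B first is cheap.
    return 1.0 if order[:2] == ("A", "B") else 10.0

random.seed(0)
q_value = {o: 0.0 for o in itertools.permutations(tables)}  # cost estimates
eps = 0.2
for _ in range(200):
    if random.random() < eps:
        order = random.choice(list(q_value))       # explore: random join order
    else:
        order = min(q_value, key=q_value.get)      # exploit: cheapest estimate
    # Update the running estimate with the observed cost (lower is better).
    q_value[order] += 0.1 * (toy_cost(order) - q_value[order])

best = min(q_value, key=q_value.get)               # -> ("A", "B", "C")
```

Under this loop, every candidate order is eventually tried, but the estimates concentrate on the cheap prefix, which is the significance signal the paper feeds into its LSTM plan-quality predictor.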
Knowledge-Based Systems, Volume 306, Article 112664.
Citations: 0
MSSTGNN: Multi-scaled Spatio-temporal graph neural networks for short- and long-term traffic prediction
IF 7.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-06 · DOI: 10.1016/j.knosys.2024.112716
Yuanhai Qu, Xingli Jia, Junheng Guo, Haoran Zhu, Wenbin Wu
Accurate traffic prediction plays a crucial role in ensuring traffic safety and minimizing property damage. Spatio-temporal graph neural networks (STGNNs) for traffic prediction have gained significant attention from researchers aiming to capture the intricate time-varying relationships within traffic data. However, existing STGNNs commonly rely on Euclidean distance to assess the similarity between nodes, which may fall short in reflecting points of interest (POI) or regional functions. Moreover, the traffic network is static from a macro perspective but undergoes dynamic changes at the micro level. Previous work incorporating self-attention to capture time-varying features for dynamic graph construction tends to overlook connections between nodes because of the Softmax polarization effect, which amplifies extreme value differences and fails to accurately represent the true relationships between nodes. To solve this problem, we introduce the Multi-Scaled Spatio-Temporal Graph Neural Networks (MSSTGNN), which aim to comprehensively capture characteristics within traffic from multiscale viewpoints to construct multi-perspective graphs. We employ a trainable matrix to enhance the predefined adjacency matrices and to construct an optimal dynamic graph based on both trend and period. Additionally, a graph aggregation technique is proposed to effectively merge trend and periodic dynamic graphs. A temporal convolutional network (TCN) is developed to model nonstationary traffic data, and we leverage skip and residual connections to increase the model depth. A two-stage learning approach and a novel MSELoss function are designed to enhance the model's performance. The experimental results demonstrate that the MSSTGNN model outperforms existing methods, achieving state-of-the-art performance across multiple real-world datasets.
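The Softmax polarization effect mentioned above is easy to demonstrate: a modest gap in raw attention scores becomes an extreme gap in attention weights, nearly erasing moderately related nodes. The scores below are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # numerically stable softmax
    return e / e.sum()

# One node's raw similarity scores to four neighbors: the first is only
# somewhat larger, yet Softmax gives it almost all of the attention weight.
scores = np.array([9.0, 6.0, 5.5, 5.0])
w = softmax(scores)

assert w[0] > 0.8             # the top score dominates...
assert w[1:].sum() < 0.2      # ...while genuine moderate connections are crushed
```

This is the failure mode that motivates building the dynamic graph from trend and period structure rather than from raw self-attention weights alone.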
Knowledge-Based Systems, Volume 306, Article 112716.
Citations: 0
Collaborative association networks with cross-level attention for session-based recommendation
IF 7.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-05 · DOI: 10.1016/j.knosys.2024.112693
Tingting Dai, Qiao Liu, Yue Zeng, Yang Xie, Xujiang Liu, Haoran Hu, Xu Luo
Session-based recommendation aims to predict the next interacted item from an anonymous user’s behavior sequence. The main challenge lies in perceiving user preference within limited interactions. Recent advances demonstrate the advantage of intents, represented by combining consecutive items, in understanding complex user behavior. However, these methods concentrate on enriching the expression of intents by considering consecutive items of different lengths, ignoring the complex transitions between intents. This limitation leaves intent transitions unclear under dynamically changing user behavior, resulting in sub-optimal performance. To solve this problem, we propose novel collaborative association networks with cross-level attention for session-based recommendation (denoted CAN4Rec), which simultaneously model intra- and inter-level transitions within hierarchical user intents. Specifically, we first construct two levels of intent, individual-level and aggregated-level, each obtained from sequential transitions. Then, a cross-level attention mechanism is designed to extract inter-level transitions between the two levels of intent. The captured transitions are bi-directional, running both from individual-level to aggregated-level intents and from aggregated-level to individual-level intents. Finally, we generate directional session representations and combine them to predict the next item. Experimental results on three public benchmark datasets demonstrate that the proposed model achieves state-of-the-art performance.
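The bi-directional cross-level attention idea can be sketched as two attention passes, one in each direction between the intent levels. The shapes, the mean-pooling used to form aggregated intents, and the final concatenation are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
items = rng.standard_normal((5, d))        # individual-level intents (5 items)
# Aggregated-level intents: each combines two consecutive items (assumed pooling).
agg = (items[:-1] + items[1:]) / 2         # shape (4, d)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv):
    """Queries from one intent level attend over keys/values of the other."""
    attn = softmax(q @ kv.T / np.sqrt(d))  # scaled dot-product weights
    return attn @ kv

ind_to_agg = cross_attention(items, agg)   # individual -> aggregated direction
agg_to_ind = cross_attention(agg, items)   # aggregated -> individual direction
# Combine both directional representations into one session vector.
session = np.concatenate([ind_to_agg.mean(0), agg_to_ind.mean(0)])
```

Scoring `session` against candidate item embeddings would then yield the next-item prediction.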
Knowledge-Based Systems, Volume 306, Article 112693.
Citations: 0
CIDGMed: Causal Inference-Driven Medication Recommendation with Enhanced Dual-Granularity Learning
IF 7.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-05 · DOI: 10.1016/j.knosys.2024.112685
Shunpan Liang, Xiang Li, Shi Mu, Chen Li, Yu Lei, Yulei Hou, Tengfei Ma
Medication recommendation aims to integrate patients’ long-term health records to provide accurate and safe medication combinations for specific health states. Existing methods often fail to deeply explore the true causal relationships between diseases/procedures and medications, resulting in biased recommendations. Additionally, in medication representation learning, the relationships between information at different granularities of a medication, coarse-grained (the medication itself) and fine-grained (the molecular level), are not effectively integrated, leading to biases in representation learning. To address these limitations, we propose the Causal Inference-driven Dual-Granularity Medication Recommendation method (CIDGMed). Our approach leverages causal inference to uncover the relationships between diseases/procedures and medications, thereby enhancing the rationality and interpretability of recommendations. By integrating coarse-grained medication effects with fine-grained molecular structure information, CIDGMed provides a comprehensive representation of medications. Additionally, we employ a bias-correction model during the prediction phase to further refine recommendations, ensuring both accuracy and safety. Through extensive experiments, CIDGMed significantly outperforms current state-of-the-art models across multiple metrics, achieving a 2.54% increase in accuracy, a 3.65% reduction in side effects, and a 39.42% improvement in time efficiency. Additionally, we demonstrate the rationale of CIDGMed through a case study.
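The dual-granularity representation and the bias-correction step can be sketched as follows. All embeddings, the pooling over molecular substructures, and the additive correction form are illustrative assumptions rather than CIDGMed's actual design:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
coarse = rng.standard_normal((3, d))   # 3 medications, coarse-grained embeddings
# Fine-grained level: each medication has a variable number of molecular
# substructure embeddings (counts are arbitrary here).
fine = [rng.standard_normal((k, d)) for k in (2, 5, 3)]

# Dual-granularity representation: coarse embedding concatenated with the
# mean-pooled substructure embeddings -> shape (3, 2d).
med_repr = np.stack([np.concatenate([coarse[i], fine[i].mean(0)])
                     for i in range(3)])

patient = rng.standard_normal(2 * d)   # patient-state embedding (assumed)
raw = med_repr @ patient               # raw recommendation scores
popularity_bias = np.array([0.9, 0.1, 0.0])   # assumed learned bias terms
scores = raw - popularity_bias         # bias-corrected scores
recommend = np.argsort(scores)[::-1]   # medications ranked best-first
```

The point of the sketch is the shape of the pipeline: fine-grained structure enriches the coarse embedding before scoring, and a learned per-medication correction adjusts the ranking at prediction time.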
Knowledge-Based Systems, Volume 309, Article 112685.
Citations: 0
ALDANER: Active Learning based Data Augmentation for Named Entity Recognition
IF 7.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-04 · DOI: 10.1016/j.knosys.2024.112682
Vincenzo Moscato, Marco Postiglione, Giancarlo Sperlì, Andrea Vignali
Training Named Entity Recognition (NER) models typically necessitates the use of extensively annotated datasets. This requirement presents a significant challenge due to the labor-intensive and costly nature of manual annotation, especially in specialized domains such as medicine and finance. To address data scarcity, two strategies have emerged as effective: (1) Active Learning (AL), which autonomously identifies samples that would most enhance model performance if annotated, and (2) data augmentation, which automatically generates new samples. However, while AL reduces human effort, it does not eliminate it entirely, and data augmentation often leads to incomplete and noisy annotations, presenting new hurdles in NER model training. In this study, we integrate AL principles into a data augmentation framework, named Active Learning-based Data Augmentation for NER (ALDANER), to prioritize the selection of informative samples from an augmented pool and mitigate the impact of noisy annotations. Our experiments across various benchmark datasets and few-shot scenarios demonstrate that our approach surpasses several data augmentation baselines, offering insights into promising avenues for future research.
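The AL-style selection step over an augmented pool can be sketched as entropy-based ranking: score each augmented sentence by the uncertainty of the model's per-token label distributions and keep the most informative ones. The probabilities below are synthetic stand-ins for a NER model's output, and ALDANER's actual acquisition strategy may differ:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_entropy(token_probs):
    """Mean per-token entropy of a sentence's label distributions."""
    p = np.clip(token_probs, 1e-12, 1.0)
    return float((-p * np.log(p)).sum(axis=-1).mean())

# Augmented pool: 6 sentences, 10 tokens each, 5 NER labels per token
# (each row of probabilities sums to 1 thanks to the Dirichlet draw).
pool = rng.dirichlet(np.ones(5), size=(6, 10))

# Rank sentences most-uncertain-first and keep the top 2 for annotation/training.
ranked = sorted(range(len(pool)),
                key=lambda i: sample_entropy(pool[i]), reverse=True)
selected = ranked[:2]
```

High-entropy samples are the ones the model is least sure about, so prioritizing them concentrates training on informative augmented data while noisy, low-value generations are left in the pool.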
Knowledge-Based Systems, vol. 305, Article 112682.
Local Metric NER: A new paradigm for named entity recognition from a multi-label perspective
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-04. DOI: 10.1016/j.knosys.2024.112686
Zaifeng Hua, Yifei Chen
As the field of Nested Named Entity Recognition (NNER) advances, it is marked by growing complexity due to the increasing number of multi-label entity instances. How to identify multi-label entities more effectively and explore the correlations between labels is the focus of our work. Unlike previous models, which formulate the task as a single-label multi-class classification problem, we propose a novel multi-label local metric NER model that rethinks nested entity recognition from a multi-label perspective. Simultaneously, to address the significant sample-imbalance problem commonly encountered in multi-label scenarios, we introduce a part-of-speech-based strategy that significantly improves the model's performance on imbalanced datasets. Experiments on nested, multi-label, and flat datasets verify the generalization and superiority of our model, with results surpassing the existing state of the art (SOTA) on several multi-label and flat benchmarks. After a series of experimental analyses, we highlight the persistent challenges in multi-label NER. We hope that the insights derived from our work will not only provide new perspectives on the nested NER landscape but also contribute to the ongoing momentum necessary for advancing research in multi-label NER.
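The decoding step a multi-label perspective implies can be illustrated with independent per-label thresholds in place of a single argmax: a span may then carry several entity types, or none. The threshold value and names below are illustrative assumptions, not taken from the paper.

```python
def decode_multilabel(span_scores, threshold=0.5):
    """For each candidate span, keep every label whose (independent)
    sigmoid score clears the threshold -- unlike single-label argmax,
    one span can receive multiple entity types, or no type at all."""
    return {
        span: [lab for lab, s in scores.items() if s >= threshold]
        for span, scores in span_scores.items()
    }

scores = {
    (0, 2): {"PER": 0.91, "ORG": 0.08, "LOC": 0.63},  # multi-label span
    (3, 5): {"PER": 0.12, "ORG": 0.22, "LOC": 0.31},  # no entity
}
print(decode_multilabel(scores))
# -> {(0, 2): ['PER', 'LOC'], (3, 5): []}
```

This is also where the sample-imbalance problem the abstract mentions shows up: most candidate spans are non-entities, so naive thresholding needs a counterweight such as the part-of-speech-based filtering strategy the authors propose.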
Knowledge-Based Systems, vol. 305, Article 112686.
CRATI: Contrastive representation-based multimodal sound event localization and detection
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-04. DOI: 10.1016/j.knosys.2024.112692
Shichao Wu, Yongru Wang, Yushan Jiang, Qianyi Zhang, Jingtai Liu
Sound event localization and detection (SELD) refers to classifying sound categories and locating their sources with acoustic models on the same multichannel audio. Recently, SELD has been evolving rapidly by leveraging advanced approaches from other research areas, and the benchmark SELD datasets have become increasingly realistic, with simultaneously captured videos provided. Vibration produces sound, and we usually associate visual objects with the sounds they make: we hear footsteps from a walking person and a jangle from a ringing bell. It is therefore natural to consider using multimodal information (image–audio–text versus audio alone) to strengthen sound event detection (SED) accuracy and decrease sound source localization (SSL) errors. In this paper, we propose a contrastive representation-based multimodal acoustic model (CRATI) for SELD, which is designed to learn contrastive audio representations from audio, text, and image in an end-to-end manner. Experiments on the real dataset of STARSS23 and the synthesized dataset of TAU-NIGENS Spatial Sound Events 2021 both show that our CRATI model can learn more effective audio features with additional constraints to minimize the difference between audio and text (the SED and SSL annotations in this work). Image input is not conducive to improving SELD performance, as only minor visual changes can be observed across consecutive frames.
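A contrastive objective of the kind the abstract describes can be sketched as an InfoNCE-style loss that pulls each audio clip toward its own text annotation (the diagonal of the similarity matrix) and pushes it away from the others. The code below is a generic sketch of that idea, not CRATI's actual loss; the temperature and embedding shapes are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(audio_embs, text_embs, temperature=0.1):
    """InfoNCE-style loss over paired batches: audio_embs[i] should be
    most similar to text_embs[i] and dissimilar to every other text."""
    n = len(audio_embs)
    loss = 0.0
    for i in range(n):
        logits = [cosine(audio_embs[i], t) / temperature for t in text_embs]
        log_denom = math.log(sum(math.exp(z) for z in logits))
        loss += -(logits[i] - log_denom)  # cross-entropy toward pair i
    return loss / n

audio = [[1.0, 0.0], [0.0, 1.0]]
text_matched = [[1.0, 0.0], [0.0, 1.0]]   # correct pairing
text_shuffled = [[0.0, 1.0], [1.0, 0.0]]  # wrong pairing
print(contrastive_loss(audio, text_matched) < contrastive_loss(audio, text_shuffled))
```

Correctly paired batches yield a much lower loss than shuffled ones, which is exactly the signal that aligns the audio representation with the SED/SSL text annotations during training.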
Compared to the baseline system, our model increases the SED F-score by 11% and decreases the SSL error by 31.02° on the STARSS23 dataset.
Knowledge-Based Systems, vol. 305, Article 112692.
Revisiting representation learning of color information: Color medical image segmentation incorporating quaternion
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-03. DOI: 10.1016/j.knosys.2024.112707
Bicheng Xia, Bangcheng Zhan, Mingkui Shen, Hejun Yang
Currently, color medical image segmentation methods commonly extract color and texture features mixed together by default; however, the distributions of color information and texture information differ: color information is represented differently in the different color channels of a color image, while the distribution of texture information remains the same. Such a simple, brute-force feature extraction pattern inevitably results in a partial bias in the model's semantic understanding. In this paper, we decouple the representation learning of color and texture information and propose a novel network for color medical image segmentation, named CTNet. Specifically, CTNet introduces a Quaternion CNN (QCNN) module to capture the correlations among the different color channels of color medical images and generate color features, and uses a designed local-global texture feature integrator (LoG) to mine textural features from the local to the global level. Moreover, a multi-stage feature interaction strategy is proposed to minimize the semantic-understanding gap between color and texture features in CTNet, so that they can subsequently be fused into a unified and robust feature representation. Comparative experiments on four different color medical image segmentation benchmark datasets show that CTNet strikes an optimal trade-off between segmentation accuracy and computational overhead compared with current state-of-the-art methods. We also conduct extensive ablation experiments to verify the effectiveness of the proposed components. Our code will be available at https://github.com/Notmezhan/CTNet.
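The key property a quaternion CNN exploits is that a single quaternion multiplication mixes all color channels at once, rather than treating R, G, and B as independent feature maps. Below is a minimal sketch of the Hamilton product with an RGB pixel embedded as a pure quaternion (0, R, G, B); the example weight is an arbitrary illustrative value, not a learned CTNet parameter.

```python
def hamilton_product(q, p):
    """Hamilton product of two quaternions given as (r, x, y, z).
    In a quaternion CNN, an RGB pixel is embedded as (0, R, G, B) and
    multiplied by a quaternion-valued weight, so every component of the
    output depends on all three color channels simultaneously."""
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = p
    return (
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,
    )

pixel = (0.0, 0.8, 0.4, 0.1)    # (0, R, G, B)
weight = (0.5, 0.1, -0.2, 0.3)  # illustrative quaternion weight
print(hamilton_product(weight, pixel))  # ≈ (-0.03, 0.26, 0.43, 0.25)
```

A quaternion convolution replaces each scalar multiply–accumulate in an ordinary convolution with this product, which is what lets the QCNN module capture cross-channel color correlations with a quarter of the free parameters of an equivalent real-valued layer.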
Knowledge-Based Systems, vol. 306, Article 112707.