
Expert Systems: Latest Publications

Grading Open-Ended Questions Using LLMs and RAG
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-28 | DOI: 10.1111/exsy.70174
Jacobo Farray Rodríguez, Antonio Jesús Fernández-García, Elena Verdú

Evaluating open-ended questions is a common and time-consuming task in education. With the continuous advances in natural language processing (NLP), large language models (LLMs) trained on massive datasets can assist in this process. This study evaluates the use of LLMs, complemented by retrieval-augmented generation (RAG), for the numerical grading of open-ended answers of approximately 250 words. We focus on two Spanish-language technical courses and assess general-purpose LLMs. Our results show that RAG improves grading accuracy, achieving reductions in mean absolute error (MAE) of up to 19.47% compared to using LLMs alone, with the best configuration reaching an MAE of 1.19. We also note that LLMs tend to assign high grades, reflecting the dataset's imbalance toward higher scores. This work demonstrates the potential of combining RAG with general-purpose LLMs to evaluate specialised Spanish-language content, avoiding the cost and bias of model fine-tuning.
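
A minimal Python sketch of how the reported error figures relate; the grade scale, reference grades and LLM-only baseline value below are illustrative assumptions rather than data from the paper:

```python
# Minimal sketch (not the authors' code) of how the reported error figures relate.
# The 0-10 grade scale, the reference grades and the LLM-only baseline MAE are
# illustrative assumptions, not values taken from the paper.

def mean_absolute_error(y_true, y_pred):
    """MAE between reference grades and model-assigned grades."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

reference = [7.0, 5.5, 9.0, 6.0]   # hypothetical instructor grades
predicted = [8.0, 6.5, 9.5, 7.5]   # hypothetical grades from an LLM+RAG pipeline
print(f"MAE = {mean_absolute_error(reference, predicted):.2f}")   # 1.00

# The abstract reports up to a 19.47% MAE reduction from adding RAG,
# with the best configuration at an MAE of 1.19.
llm_only_mae = 1.4777                        # assumed LLM-only baseline
rag_mae = llm_only_mae * (1 - 0.1947)
print(f"RAG MAE = {rag_mae:.2f}")            # about 1.19
```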

Citations: 0
TimeBrush: An Intelligent Expert System for Restoring Historical Images With Temporal and Stylistic Guidance
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-26 | DOI: 10.1111/exsy.70175
Kaiyu Zhang

Historical photographs and artworks often suffer from degradation, missing regions, or stylistic corruption due to aging, scanning artefacts, or incomplete archival processes. While recent image inpainting and restoration models achieve plausible visual reconstructions, they often disregard the cultural and temporal context of the content—producing restorations that are visually coherent yet stylistically anachronistic. In this paper, we present TimeBrush, a temporally guided diffusion-based framework for historical image restoration. By conditioning the generation process on explicit temporal prompts (e.g., art period, century), and reinforcing stylistic alignment through a learned style consistency discriminator, TimeBrush faithfully reconstructs missing content while preserving culturally significant visual traits. Our framework integrates a Temporal Prompt Encoder and a Style Consistency Discriminator, allowing restorations to be faithful both temporally and stylistically. TimeBrush improves Style Accuracy by over 7% compared with state-of-the-art baselines, while also having better perceptual quality. These results indicate TimeBrush's promising opportunities for AI-assisted cultural heritage preservation and museum digitization.
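
As a rough illustration of temporal conditioning, the sketch below embeds a discrete period label and uses it to modulate image features; the module names, period vocabulary and dimensions are assumptions, not the TimeBrush implementation:

```python
# Illustrative sketch only: conditioning image features on a discrete temporal prompt
# (e.g., an art period) via a learned embedding. Names and sizes are assumptions.
import torch
import torch.nn as nn

PERIODS = {"baroque": 0, "impressionism": 1, "early_photography": 2}

class TemporalPromptEncoder(nn.Module):
    def __init__(self, num_periods: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_periods, dim)

    def forward(self, period_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(period_ids)            # (B, dim) conditioning vector

class ConditionedBlock(nn.Module):
    """Injects the temporal embedding into spatial features by channel-wise modulation."""
    def __init__(self, channels: int, cond_dim: int = 64):
        super().__init__()
        self.to_scale = nn.Linear(cond_dim, channels)

    def forward(self, feats: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        scale = self.to_scale(cond).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return feats * (1 + scale)

encoder = TemporalPromptEncoder(len(PERIODS))
block = ConditionedBlock(channels=32)
feats = torch.randn(2, 32, 16, 16)
cond = encoder(torch.tensor([PERIODS["baroque"], PERIODS["impressionism"]]))
print(block(feats, cond).shape)                  # torch.Size([2, 32, 16, 16])
```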

Citations: 0
Dual-Branch Cross-Diversion Transformer With Spatial Soft Alignment for Few-Shot Surface Defect Detection
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-26 | DOI: 10.1111/exsy.70177
Xiaohua Huang, Peilin Li

High-performance surface defect detection is essential for industrial quality inspection, requiring accurate detection and characterisation of surface defects. While deep learning-based methods have advanced this field, challenges persist due to limited sample availability and variation in defect types. To address these challenges, we propose a new detection framework, namely, the Dual-branch Cross-Diversion Transformer with Spatial Soft Alignment, specifically designed for surface defect detection under data-scarce conditions. First, the dual-branch cross-transformer is leveraged to address data scarcity and enhance defect detection sensitivity through a few-shot pipeline. Furthermore, the Adaptive Activation Downsampling module is proposed to capture coarse-grained structural features while preserving fine-grained defect details, ensuring comprehensive surface defect characterisation. Additionally, a Cross-Diversion Self-Attention mechanism further improves multi-scale feature extraction, critical for accurate detection of diverse defect types. Finally, a Spatial Soft Alignment strategy corrects spatial misalignment between detection proposals and defect categories, reducing detection uncertainty. Through extensive experiments on two benchmark industrial datasets, our proposed architecture achieves superior performance compared to state-of-the-art methods, demonstrating its robustness and accuracy. These results demonstrate the effectiveness of the proposed method and its potential to advance surface defect detection techniques.
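
The sketch below illustrates the general idea of one branch attending to the other using off-the-shelf multi-head attention; it is a stand-in for, not a reproduction of, the paper's Cross-Diversion Self-Attention:

```python
# Simplified sketch: a query-image branch attends to a few-shot support branch with
# standard multi-head attention, so exemplar defect cues can steer the query features.
import torch
import torch.nn as nn

dim, heads = 64, 4
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

query_tokens = torch.randn(2, 196, dim)    # tokens from the query image branch
support_tokens = torch.randn(2, 49, dim)   # tokens from the few-shot support branch

fused, attn_weights = cross_attn(query_tokens, support_tokens, support_tokens)
print(fused.shape, attn_weights.shape)     # (2, 196, 64) (2, 196, 49)
```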

Citations: 0
Ensemble Graph Convolutional Networks for Improving the Performance of Aspect-Level Sentiment Analysis
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1111/exsy.70169
Huyen Trang Phan, Van Du Nguyen, Ngoc Thanh Nguyen

Aspect-level sentiment analysis (ALSA) is the process of determining the emotional polarity that people have towards aspects of topics or entities expressed in their opinions. ALSA is increasingly integrated into many practical applications to make them more user-friendly and better aligned with users' psychological and emotional tendencies. Therefore, researchers are increasingly studying how to improve the performance of ALSA methods. Various approaches have been proposed for ALSA, the most recent being Graph Convolutional Networks (GCNs). Although they have performed well, previous GCN-based methods still fail to capture all important features from opinions. This raises the question of whether combining GCN-based ALSA models can improve their ability to capture important features. This motivates us to propose an ALSA method based on Ensemble Graph Convolutional Networks (EGCNs). The objective of the proposed method is to capture features in a manner that is both independent and joint, in order to leverage the advantages of jointly learning features while also benefiting from the strengths of learning features independently. The proposed method includes the following main steps: (i) data representation based on the BERT model; (ii) extracting syntactic, semantic and contextual features based on the ASGCN, ATGCN and ASCNN models, respectively; (iii) combining the extracted feature vectors into a general feature vector based on the fusion mechanism; (iv) sentiment analysis based on the Softmax function. To demonstrate the performance of the EGCNs model, we evaluate it on three benchmark datasets and compare it with the individual methods prior to their combination.
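
A compact sketch of steps (iii) and (iv) only: fusing three feature vectors and classifying with a softmax layer. The random inputs stand in for ASGCN, ATGCN and ASCNN outputs, and all dimensions are illustrative:

```python
# Sketch of steps (iii)-(iv): concatenation-based fusion of three independently
# learned feature vectors followed by a softmax sentiment classifier.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, feat_dim: int = 128, num_classes: int = 3):
        super().__init__()
        self.fuse = nn.Linear(3 * feat_dim, feat_dim)   # simple fusion mechanism
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, syntactic, semantic, contextual):
        joint = torch.cat([syntactic, semantic, contextual], dim=-1)
        joint = torch.relu(self.fuse(joint))
        return torch.softmax(self.head(joint), dim=-1)  # e.g., positive / negative / neutral

model = FusionClassifier()
f_syn, f_sem, f_ctx = (torch.randn(4, 128) for _ in range(3))
probs = model(f_syn, f_sem, f_ctx)
print(probs.shape, probs.sum(dim=-1))   # (4, 3), each row sums to 1
```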

Citations: 0
Personalised Recommendation With Federated Lightweight Graph Convolutional Networks
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-24 | DOI: 10.1111/exsy.70173
Yansong Zhu, Yuyang Guo

Federated recommendation systems aim to provide personalised services in decentralised environments while preserving user privacy. However, under strict privacy constraints and limited local information, existing federated models struggle to capture user-item interactions effectively. This paper proposes FedLGCN, a novel federated recommendation framework that combines Lightweight Graph Convolutional Networks with a privacy-enhanced local training protocol. A hash-based privacy-aware graph augmentation strategy is introduced to enrich each client's local subgraph without disclosing sensitive neighbour information. Additionally, Local Differential Privacy is employed to perturb gradients before aggregation, providing robust protection against inference attacks. The Lightweight Graph Convolutional Networks-based embedding module enables efficient and scalable representation learning with reduced communication and computational costs. Extensive experiments on three public benchmark datasets demonstrate that FedLGCN achieves a well-balanced trade-off among communication cost, recommendation accuracy and privacy protection, consistently outperforming several baseline methods in federated recommendation.
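
The Local Differential Privacy step can be pictured as clip-then-noise on each client's update before upload, as in the hedged sketch below; the clipping norm and noise multiplier are illustrative, not the paper's settings:

```python
# Minimal sketch of the local differential privacy step described in the abstract:
# each client clips its update and adds calibrated Gaussian noise before sending it.
import torch

def privatize_update(update: torch.Tensor, clip_norm: float = 1.0,
                     noise_multiplier: float = 0.5) -> torch.Tensor:
    # 1) Clip the update to bound each client's sensitivity.
    norm = update.norm(p=2)
    clipped = update * torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
    # 2) Add Gaussian noise scaled to the clipping bound.
    noise = torch.randn_like(clipped) * noise_multiplier * clip_norm
    return clipped + noise

local_gradient = torch.randn(1000)            # a client's flattened embedding gradient
safe_gradient = privatize_update(local_gradient)
print(local_gradient.norm().item(), safe_gradient.norm().item())
```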

Citations: 0
GSSFL: A Group Signature and Smart Contract-Based Framework for Privacy-Preserving Federated Learning
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-23 | DOI: 10.1111/exsy.70166
Yihao Wang, Ting Yang, Chenxi Xiong

Federated learning has emerged as a powerful paradigm for collaborative machine learning across multiple parties, holding considerable potential for modern industries. However, its inherently decentralised and collaborative nature raises critical concerns about data security and user privacy. Sensitive information—such as user preferences, behaviours, and identities—remains susceptible to inference attacks, revealing the limitations of conventional privacy-preserving techniques in existing federated learning frameworks. To address these challenges, this paper presents GSSFL, a novel federated learning architecture that integrates smart contracts and group signatures to enhance both privacy protection and system trustworthiness. GSSFL enables secure and verifiable data exchange without compromising user anonymity, while its decentralised design encourages broader participation in federated learning processes. Experimental results demonstrate that GSSFL effectively satisfies the demands of privacy-preserving data sharing with minimal performance overhead.
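
To picture the verify-before-aggregate flow, the sketch below uses an ordinary Ed25519 signature from the `cryptography` package as a stand-in; a true group signature, which is what GSSFL relies on to keep the signer anonymous within its group, is a different primitive and is not implemented here:

```python
# Stand-in sketch only: signing and verifying a model update before aggregation,
# using an ordinary Ed25519 signature. This is NOT a group signature scheme and
# does not provide the signer anonymity that GSSFL's construction targets.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Client side: sign the serialized update.
client_key = ed25519.Ed25519PrivateKey.generate()
update_bytes = b"serialized-model-update"          # placeholder payload
signature = client_key.sign(update_bytes)

# Aggregator side: accept the update only if the signature verifies.
public_key = client_key.public_key()
try:
    public_key.verify(signature, update_bytes)
    print("update accepted for aggregation")
except InvalidSignature:
    print("update rejected")
```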

Citations: 0
Local Causal Discovery Towards High-Dimensional Streaming Features
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-21 | DOI: 10.1111/exsy.70170
Waqar Khan, Brekhna Brekhna, Muhammad Sadiq Hassan Zada, Shina Niu, Dong Siqi, Lingfu Kong, Yajun Xie

Causal discovery focuses on identifying the direct causes and effects of a target feature of interest (e.g., the class label) in a Bayesian network (BN). Existing causal discovery primarily includes global and local learning algorithms, which must access the whole feature space before the learning process starts. However, many real-world applications continuously generate features in real time and demand stream processing of features for just-in-time decision-making. In addition, existing local and global learning algorithms emphasise either accuracy or computational efficiency rather than balancing both. Therefore, to address these problems and handle dynamic high-dimensional feature spaces, we propose a novel local causal discovery algorithm based on streaming features that improves and balances both computational efficiency and prediction accuracy, denoted Local Causal Discovery towards High-Dimensional Streaming Features (LCD_SF). More specifically, to attain this objective, LCD_SF dynamically integrates V- and N-structures to learn the Markov blanket (MB) and simultaneously distinguishes the direct causes (parents) from the direct effects (children), and the parents-children (PC) from the spouses, of the target feature. It thus achieves a balance between efficiency and prediction accuracy. The proposed algorithm has been extensively evaluated on 10 benchmark BNs and 10 real-world datasets. The results show that it outperforms the existing state-of-the-art baseline algorithms. The source code is available at: https://github.com/vickykhan89/LCDSF.
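
Markov blanket learners of this kind repeatedly call a conditional independence test on arriving features; the sketch below shows one common primitive, a Fisher-z test on (partial) correlations, and is a building block rather than the LCD_SF algorithm itself:

```python
# Building-block sketch, not LCD_SF: a Fisher-z conditional independence test based
# on partial correlation, the kind of primitive a Markov blanket / V- and
# N-structure learner repeatedly invokes on streaming features.
import numpy as np
from scipy import stats

def fisher_z_ci_test(x, y, z=None, alpha=0.05):
    """Return True if x is (conditionally) independent of y given the columns of z."""
    n = len(x)
    if z is None or z.size == 0:
        r = np.corrcoef(x, y)[0, 1]
        k = 0
    else:
        # Partial correlation: correlate residuals of x and y after regressing on z.
        Z = np.column_stack([np.ones(n), z])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        r = np.corrcoef(rx, ry)[0, 1]
        k = z.shape[1]
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    p_value = 2 * stats.norm.sf(abs(z_stat))
    return p_value > alpha

rng = np.random.default_rng(0)
c = rng.normal(size=500)                       # common cause
x, y = c + rng.normal(size=500), c + rng.normal(size=500)
print(fisher_z_ci_test(x, y))                  # usually False: marginally dependent
print(fisher_z_ci_test(x, y, z=c.reshape(-1, 1)))   # usually True: independent given c
```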

Citations: 0
A Knowledge-Driven Expert System for Robust Cardiac Event Detection Using Multi-Scale Temporal Transformers
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-20 | DOI: 10.1111/exsy.70171
Jianyu Huang, Chunyan Jiang, Xiaomin Huang, Yuting Liu, Fang Li, Rong Deng, Lin Xu, Wanqing Wu

Cardiovascular diseases remain the leading cause of death worldwide, highlighting the need for expert systems that enable continuous and interpretable cardiac monitoring. We present ArmFormer, a knowledge-driven expert system that leverages Transformer-based reasoning for robust cardiac event detection from wearable armband electrocardiogram signals. The model integrates domain-guided multi-scale patch encoding to capture waveform morphology and rhythm dependencies, while local gated Transformer blocks enhance temporal continuity and suppress noise-induced variability. A lead-wise attention mechanism coupled with gradient-based visualisation provides interpretability by highlighting clinically relevant regions such as QRS complexes, P waves, and ST segments. On an in-house cohort of 99 subjects comprising 6211 normal and 10,030 abnormal 10-s segments, ArmFormer achieved 91.66% accuracy, 91.57% F1-score, 91.41% sensitivity, and 97.36% AUC under a subject-exclusive protocol that prevents patient-level information leakage. Compared with convolutional and residual baselines, AUC improved by up to 5%, while floating-point operations and parameters were reduced by 29-fold and 47-fold, respectively, achieving 2.58 ms inference latency per segment. External validation on CPSC2018 and Chapman showed accuracies of 83.65% and 94.69%, with AUCs of 96.69% and 99.38%, respectively. By combining domain-guided encoding, noise-robust temporal reasoning, and interpretable attention, ArmFormer provides a practical and reliable framework for expert-level cardiac event detection in wearable monitoring scenarios.
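
Multi-scale patch encoding can be sketched as parallel 1-D convolutional stems with different patch lengths over a single ECG segment; the channel counts, patch sizes and the 250 Hz sampling rate below are assumptions, not ArmFormer's actual configuration:

```python
# Illustrative sketch of multi-scale patch encoding for a 1-D ECG segment: parallel
# Conv1d stems with different kernel/stride sizes yield token sequences at several
# temporal resolutions, which are concatenated for the transformer backbone.
import torch
import torch.nn as nn

class MultiScalePatchEncoder(nn.Module):
    def __init__(self, in_channels: int = 1, dim: int = 64, patch_sizes=(25, 50, 125)):
        super().__init__()
        self.stems = nn.ModuleList(
            nn.Conv1d(in_channels, dim, kernel_size=p, stride=p) for p in patch_sizes
        )

    def forward(self, ecg: torch.Tensor):
        # ecg: (batch, channels, samples), e.g., a 10-s segment at 250 Hz -> 2500 samples
        tokens = [stem(ecg).transpose(1, 2) for stem in self.stems]   # each (B, N_i, dim)
        return torch.cat(tokens, dim=1)                               # (B, sum N_i, dim)

encoder = MultiScalePatchEncoder()
segment = torch.randn(2, 1, 2500)
print(encoder(segment).shape)   # torch.Size([2, 170, 64]) = 100 + 50 + 20 tokens
```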

Citations: 0
A Supervised Variational Autoencoder for Incomplete Multi-View Classification
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-20 | DOI: 10.1111/exsy.70172
Yi Xu, Anchi Chen

Although significant progress has been made in multi-view classification over the past few decades, handling multi-view data with arbitrary view missing is still a challenge. To address the challenge of incomplete multi-view classification, we propose a novel framework named Supervised Variational Incomplete Multi-View Classification (SVIMC) network, which completes incomplete multi-view data and performs classification predictions. Specifically, we design a supervised multi-view variational autoencoder for missing view completion, which involves a Product of Experts (PoE) network to obtain the latent joint representation of available views. This representation is then fed into the view-specific decoders of the missing views to generate the imputations. Besides, we jointly optimise the classification network and the missing view completion module, allowing them to mutually promote each other. Moreover, by adopting the GradNorm method, we significantly reduce the difficulty of model training. Extensive experiments have been conducted to demonstrate the effectiveness of our method in terms of classification accuracy, missing view imputation visualisation and ablation study.
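
For Gaussian experts, the Product of Experts used to form the joint latent representation has a closed form: precisions add and means are precision-weighted, so missing views simply drop out of the product. A small sketch with illustrative shapes:

```python
# Sketch of the Gaussian Product-of-Experts step: the joint posterior over the latent
# code combines only the *available* views, so missing views are simply omitted.
import torch

def product_of_experts(mus, logvars):
    """mus, logvars: lists of (B, d) tensors, one per available view."""
    precisions = [torch.exp(-lv) for lv in logvars]          # 1 / sigma^2 per view
    joint_precision = torch.stack(precisions).sum(dim=0)
    joint_var = 1.0 / joint_precision
    joint_mu = joint_var * torch.stack(
        [p * m for p, m in zip(precisions, mus)]
    ).sum(dim=0)
    return joint_mu, torch.log(joint_var)

# Two available views of a 3-sample batch with a 4-dimensional latent space.
mus = [torch.randn(3, 4), torch.randn(3, 4)]
logvars = [torch.zeros(3, 4), torch.zeros(3, 4)]              # unit-variance experts
joint_mu, joint_logvar = product_of_experts(mus, logvars)
print(joint_mu.shape, joint_logvar.exp()[0, 0].item())        # (3, 4), variance 0.5
```

With two unit-variance experts, the joint variance is 0.5 and the joint mean is the average of the two view means, which is the expected precision-weighted combination.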

Citations: 0
A Lightweight Multi-Scale Feature Enhancement Network for Person Re-ID
IF 2.3 | CAS Q4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-14 | DOI: 10.1111/exsy.70165
Qihao Liu, Ming Ma, Pengyuan Shen, Bo Hu, Xin Yuan, Tiancun Guo, Mingliang Gao

Recent advances in deep learning have significantly propelled progress in person re-identification. However, many current solutions often prioritize architectural optimization. Although this approach has led to considerable performance improvements, it may inadvertently overlook the critical challenge of suppressing interference from complex background noise. To explore a path toward addressing this aspect, we propose a Multi-scale Feature Enhancement Network (MSFENet). Our approach includes a spatial-frequency fusion module designed to guide the network's attention toward pedestrian-specific regions. Furthermore, the incorporation of frequency-domain cues is intended to facilitate the capture of fine-grained details, thereby potentially enhancing robustness. We also design a Multi-Granularity Fusion (MGFusion) module to help alleviate overfitting and information loss during feature interaction. Experimental results indicate that MSFENet achieves competitive performance across evaluated tasks on the Market1501 and MSMT17 datasets, as well as under cross-domain settings (MSMT17 → Market1501).
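
One way to picture spatial-frequency fusion is a frequency branch built from the FFT magnitude of a feature map, mixed back into the spatial branch with 1x1 convolutions; the sketch below is illustrative and not MSFENet's actual module:

```python
# Rough sketch of a spatial-frequency fusion idea: the 2-D FFT magnitude of the
# feature map supplies frequency-domain cues that are fused with the spatial branch.
import torch
import torch.nn as nn

class SpatialFrequencyFusion(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.freq_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # The magnitude spectrum keeps frequency cues with the same spatial shape.
        freq = torch.fft.fft2(feats, norm="ortho").abs()
        freq = self.freq_proj(freq)
        return self.mix(torch.cat([feats, freq], dim=1))

fusion = SpatialFrequencyFusion()
x = torch.randn(2, 32, 24, 12)     # pedestrian feature map (tall aspect ratio)
print(fusion(x).shape)             # torch.Size([2, 32, 24, 12])
```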

Citations: 0