
IEEE Open Journal of the Computer Society: Latest Publications

2025 Reviewers List*
Pub Date : 2026-01-19 DOI: 10.1109/OJCS.2026.3653583
{"title":"2025 Reviewers List*","authors":"","doi":"10.1109/OJCS.2026.3653583","DOIUrl":"https://doi.org/10.1109/OJCS.2026.3653583","url":null,"abstract":"","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11358645","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AG-CLIP: Attribute-Guided CLIP for Zero-Shot Fine-Grained Recognition
Pub Date : 2026-01-15 DOI: 10.1109/OJCS.2026.3654171
Jamil Ahmad;Mustaqeem Khan;Wail Guiaeab;Abdulmotaleb Elsaddik;Giulia De Masi;Fakhri Karray
Zero-shot fine-grained recognition is challenging due to high visual similarities between classes and the inferior encoding of fine-grained features in embedding models. In this work, we present an attribute-guided Contrastive Language-Image Pre-training (AG-CLIP) model with an additional attribute encoder. Our approach first identifies relevant visual attributes from the textual class descriptions using an attribute mining module that leverages the large language model (LLM) GPT-4o. The attributes are then used to construct prompts for an open vocabulary object/region detector to extract relevant corresponding image regions. The attribute text, along with focused regions of the input, then guides the CLIP model to focus on these discriminative attributes during fine-tuning through a context-attribute fusion module. Our attribute-guided attention mechanism allows CLIP to effectively disambiguate fine-grained classes by highlighting their distinctive attributes without requiring fine-tuning or additional training data on unseen classes. We evaluate our approach on the CUB-200-2011 and plant disease datasets, achieving 73.3% and 84.6% accuracy, respectively. Our method achieves state-of-the-art zero-shot performance, outperforming prior methods that rely on external knowledge bases or complex meta-learning strategies. The strong results demonstrate the effectiveness of injecting generic attribute awareness into powerful vision-language models like CLIP for tackling fine-grained recognition in a zero-shot manner.
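As a rough illustration of the attribute-guided idea, the sketch below scores an image against attribute-augmented class prompts using an off-the-shelf CLIP checkpoint via Hugging Face transformers. The hard-coded attribute lists stand in for the paper's GPT-4o attribute-mining module, and the region detector and context-attribute fusion fine-tuning are not reproduced; the class names, attributes, and checkpoint are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: attribute-augmented zero-shot scoring with a stock CLIP model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical mined attributes for two visually similar bird classes.
class_attributes = {
    "indigo bunting": ["deep blue plumage", "short conical beak"],
    "blue grosbeak": ["chestnut wing bars", "large thick beak"],
}

def classify(image_path: str) -> str:
    image = Image.open(image_path).convert("RGB")
    prompts, owners = [], []
    for cls, attrs in class_attributes.items():
        for attr in attrs:
            prompts.append(f"a photo of a {cls} with {attr}")
            owners.append(cls)
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image[0]  # similarity to each prompt
    # Average prompt scores per class and pick the best-matching class.
    scores = {cls: logits[[i for i, o in enumerate(owners) if o == cls]].mean().item()
              for cls in class_attributes}
    return max(scores, key=scores.get)
```

Averaging prompt scores per class is one simple way to aggregate attribute evidence; the paper instead fuses attribute and context features inside the model during fine-tuning.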
{"title":"AG-CLIP: Attribute-Guided CLIP for Zero-Shot Fine-Grained Recognition","authors":"Jamil Ahmad;Mustaqeem Khan;Wail Guiaeab;Abdulmotaleb Elsaddik;Giulia De Masi;Fakhri Karray","doi":"10.1109/OJCS.2026.3654171","DOIUrl":"https://doi.org/10.1109/OJCS.2026.3654171","url":null,"abstract":"Zero-shot fine-grained recognition is challenging due to high visual similarities between classes and the inferior encoding of fine-grained features in embedding models. In this work, we present an attribute-guided Contrastive Language-Image Pre-training (AG-CLIP) model with an additional attribute encoder. Our approach first identifies relevant visual attributes from the textual class descriptions using an attribute mining module leveraging a large language model (LLM) GPT-4o. The attributes are then used to construct prompts for an open vocabulary object/region detector to extract relevant corresponding image regions. The attribute text, along with focused regions of the input, then guides the CLIP model to focus on these discriminative attributes during fine-tuning through a context-attribute fusion module. Our attribute-guided attention mechanism allows CLIP to effectively disambiguate fine-grained classes by highlighting their distinctive attributes without requiring fine-tuning or additional training data on unseen classes. We evaluate our approach on the CUB-200-2011 and plant disease datasets, achieving 73.3% and 84.6% accuracy, respectively. Our method achieves state-of-the-art zero-shot performance, outperforming prior methods that rely on external knowledge bases or complex meta-learning strategies. The strong results demonstratethe effectiveness of injecting generic attribute awareness into powerful vision-language models like CLIP for tackling fine-grained recognition in a zero-shot manner.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"365-375"},"PeriodicalIF":0.0,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11352798","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MM-3DAttNet: Multi-Modal 3D Attention Network for MGMT Methylation Prediction
Pub Date : 2026-01-14 DOI: 10.1109/OJCS.2026.3654173
Gayathri Ramasamy;Tripty Singh;Xiaohui Yuan;Ganesh R Naik
The methylation status of the $O^{6}$-methylguanine-DNA methyltransferase (MGMT) promoter is an established prognostic and predictive biomarker in glioma, particularly for estimating response to alkylating chemotherapy such as temozolomide. However, many existing radiogenomic methods remain constrained by invasive biopsy dependence, slice-wise 2D modelling, limited use of multi-modal MRI, and insufficient interpretability, which collectively impede clinical translation. We propose MM-3DAttNet, a multi-modal 3D attention network for noninvasive prediction of MGMT promoter methylation status from pre-operative multiparametric brain MRI. The model employs four modality-specific 3D CNN encoder branches (T1, T1ce, T2, and FLAIR) and integrates them using a cross-modality attention fusion module to capture complementary diagnostic cues. MM-3DAttNet was trained and evaluated on the BraTS 2021 cohort comprising 585 glioma cases with MGMT labels, achieving an average accuracy of 91.6%, $F_{1}$-score of 89.9%, and AUC of 0.925 under five-fold cross-validation. Interpretability was supported using Grad-CAM saliency maps, which consistently emphasized clinically relevant regions such as enhancing tumour boundaries and peritumoural oedema. Ablation experiments verified the importance of multi-modal learning and attention-based fusion, with the most pronounced performance reductions observed when excluding T1ce or FLAIR. Overall, MM-3DAttNet provides an accurate and interpretable radiogenomic framework for MGMT methylation assessment and supports future validation in multi-centre settings and integration into MRI-based decision-support workflows for glioma management.
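A minimal PyTorch sketch of the architecture as described: four modality-specific 3D CNN encoders (T1, T1ce, T2, FLAIR) whose per-modality feature vectors are fused with multi-head attention before a binary MGMT-methylation head. Layer widths, depths, and input sizes are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small 3D CNN that maps one MRI volume to a feature vector."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),            # global pooling -> one vector per volume
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, x):                       # x: (B, 1, D, H, W)
        return self.proj(self.net(x).flatten(1))

class MM3DAttNetSketch(nn.Module):
    def __init__(self, dim: int = 64, n_modalities: int = 4):
        super().__init__()
        self.encoders = nn.ModuleList(ModalityEncoder(dim) for _ in range(n_modalities))
        self.fusion = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 1)           # MGMT methylated vs. unmethylated logit

    def forward(self, volumes):                 # list of 4 tensors, each (B, 1, D, H, W)
        tokens = torch.stack([enc(v) for enc, v in zip(self.encoders, volumes)], dim=1)
        fused, _ = self.fusion(tokens, tokens, tokens)   # cross-modality attention
        return self.head(fused.mean(dim=1))

model = MM3DAttNetSketch()
dummy = [torch.randn(2, 1, 32, 64, 64) for _ in range(4)]
print(model(dummy).shape)                       # torch.Size([2, 1])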
{"title":"MM-3DAttNet: Multi-Modal 3D Attention Network for MGMT Methylation Prediction","authors":"Gayathri Ramasamy;Tripty Singh;Xiaohui Yuan;Ganesh R Naik","doi":"10.1109/OJCS.2026.3654173","DOIUrl":"https://doi.org/10.1109/OJCS.2026.3654173","url":null,"abstract":"The methylation status of the <inline-formula><tex-math>$O^{6}$</tex-math></inline-formula>-methylguanine-DNA methyltransferase (MGMT) promoter is an established prognostic and predictive biomarker in glioma, particularly for estimating response to alkylating chemotherapy such as temozolomide. However, many existing radiogenomic methods remain constrained by invasive biopsy dependence, slice-wise 2D modelling, limited use of multi-modal MRI, and insufficient interpretability, which collectively impede clinical translation. We propose MM-3DAttNet, a multi-modal 3D attention network for noninvasive prediction of MGMT promoter methylation status from pre-operative multiparametric brain MRI. The model employs four modality-specific 3D CNN encoder branches (T1, T1ce, T2, and FLAIR) and integrates them using a cross-modality attention fusion module to capture complementary diagnostic cues. MM-3DAttNet was trained and evaluated on the BraTS 2021 cohort comprising 585 glioma cases with MGMT labels, achieving an average accuracy of 91.6%, <inline-formula><tex-math>$_{1}$</tex-math></inline-formula>-score of 89.9%, and AUC of 0.925 under five-fold cross-validation. Interpretability was supported using Grad-CAM saliency maps, which consistently emphasized clinically relevant regions such as enhancing tumour boundaries and peritumoural oedema. Ablation experiments verified the importance of multi-modal learning and attention-based fusion, with the most pronounced performance reductions observed when excluding T1ce or FLAIR. Overall, MM-3DAttNet provides an accurate and interpretable radiogenomic framework for MGMT methylation assessment and supports future validation in multi-centre settings and integration into MRI-based decision-support workflows for glioma management.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"343-353"},"PeriodicalIF":0.0,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11352853","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EMO-CARE: EEG Multi-Scale Temporal Modeling With Channel-Aware Feature Attention for Robust Subject-Independent Emotion Recognition
Pub Date : 2026-01-13 DOI: 10.1109/OJCS.2026.3653766
Yeganeh Abdollahinejad;Ahmad Mousavi;Petros Siaplaouras;Zois Boukouvalas;Roberto Corizzo
Electroencephalography (EEG)-based emotion recognition holds promise for real-time mental health monitoring, adaptive interfaces, and affective computing. However, accurate prediction across individuals remains challenging due to inter-subject variability and the non-stationary nature of EEG signals. To address this, we propose EMO-CARE, a lightweight deep learning framework that integrates multi-scale temporal convolutional networks with feature-level self-attention operating on multi-scale temporal representations. This architecture captures emotional patterns across diverse neural timescales while adaptively weighting multi-scale temporal features based on their relevance. Evaluated under the rigorous Leave-One-Subject-Out (LOSO) protocol on three benchmark datasets (SEED, SEED-V, and DREAMER), EMO-CARE achieves state-of-the-art accuracy with low inference latency. Extensive ablation experiments demonstrate the contribution of each architectural component, and the learned attention patterns align with known emotion-related neural activity. These findings collectively highlight EMO-CARE’s effectiveness in achieving subject-independent generalization and real-time applicability for EEG-based emotion recognition.
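A minimal sketch of the two described ingredients, multi-scale temporal convolutions and self-attention over the resulting scale-wise features, assuming 62-channel EEG input and a 3-class output; kernel sizes, channel counts, and the head are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class EmoCareSketch(nn.Module):
    def __init__(self, n_channels=62, dim=64, kernel_sizes=(3, 7, 15), n_classes=3):
        super().__init__()
        # One temporal-convolution branch per timescale (kernel size).
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(n_channels, dim, k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            for k in kernel_sizes
        )
        # Self-attention adaptively weights the scale-wise feature tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                                   # x: (B, channels, time)
        tokens = torch.stack([b(x).squeeze(-1) for b in self.branches], dim=1)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.head(attended.mean(dim=1))

model = EmoCareSketch()
print(model(torch.randn(8, 62, 1000)).shape)                # torch.Size([8, 3])
```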
{"title":"EMO-CARE: EEG Multi-Scale Temporal Modeling With Channel-Aware Feature Attention for Robust Subject-Independent Emotion Recognition","authors":"Yeganeh Abdollahinejad;Ahmad Mousavi;Petros Siaplaouras;Zois Boukouvalas;Roberto Corizzo","doi":"10.1109/OJCS.2026.3653766","DOIUrl":"https://doi.org/10.1109/OJCS.2026.3653766","url":null,"abstract":"Electroencephalography (EEG)-based emotion recognition holds promise for real-time mental health monitoring, adaptive interfaces, and affective computing. However, accurate prediction across individuals remains challenging due to inter-subject variability and the non-stationary nature of EEG signals. To address this, we propose EMO-CARE, a lightweight deep learning framework that integrates multi-scale temporal convolutional networks with feature-level self-attention operating on multi-scale temporal representations. This architecture captures emotional patterns across diverse neural timescales while adaptively weighting multi-scale temporal features based on their relevance. Evaluated under the rigorous Leave-One-Subject-Out (LOSO) protocol on three benchmark datasets: SEED, SEED-V, and DREAMER, EMO-CARE achieves state-of-the-art accuracy with low inference latency. Extensive ablation experiments demonstrate the contribution of each architectural component, and the learned attention patterns align with known emotion-related neural activity. These findings collectively highlight EMO-CARE’s effectiveness in achieving subject-independent generalization and real-time applicability for EEG-based emotion recognition.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"354-364"},"PeriodicalIF":0.0,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11348088","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Image Copyright Protection: A Comprehensive Survey of Digital Watermarking, Deep Learning, and Blockchain Approaches
Pub Date : 2026-01-12 DOI: 10.1109/OJCS.2026.3651292
Phuc Nguyen;Tan Hanh;Truong Duy Dinh;Trong Thua Huynh
Images have become a strategic digital asset that powers creative industries, e-commerce, and data-driven services. However, modern editing tools and large-scale sharing platforms have made copyright infringement, unauthorized redistribution, and covert manipulation easier to perpetrate and harder to detect. These risks lead to financial losses and weaken trust in digital ecosystems, creating an urgent need for technical protections that complement legal remedies. This paper presents a comprehensive survey of technologies and approaches for image copyright protection, with a particular emphasis on digital watermarking, deep learning-based methods, and blockchain-enabled frameworks. We systematically examine the principles, mechanisms, and applications of these techniques, evaluating their strengths, limitations, and potential synergies. In addition, we explore how these technologies can be effectively integrated into practical systems for secure, reliable, and scalable copyright protection of images. Finally, we identify existing challenges and propose promising future research directions to advance the state of the art in image copyright protection.
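For readers new to the area, the sketch below shows the simplest spatial-domain technique in the watermarking family the survey covers, least-significant-bit (LSB) embedding and extraction on a grayscale image array; the robust transform-domain, learned, and blockchain-anchored schemes surveyed in the paper are considerably more involved.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one watermark bit into the least significant bit of each leading pixel."""
    stego = cover.flatten().copy()
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits watermark bits from the stego image."""
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # toy grayscale image
mark = np.random.randint(0, 2, size=128, dtype=np.uint8)           # toy watermark bits
assert np.array_equal(extract_lsb(embed_lsb(cover, mark), mark.size), mark)
```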
{"title":"Image Copyright Protection: A Comprehensive Survey of Digital Watermarking, Deep Learning, and Blockchain Approaches","authors":"Phuc Nguyen;Tan Hanh;Truong Duy Dinh;Trong Thua Huynh","doi":"10.1109/OJCS.2026.3651292","DOIUrl":"https://doi.org/10.1109/OJCS.2026.3651292","url":null,"abstract":"Images have become a strategic digital asset that powers creative industries, e–commerce, and data–driven services. However, modern editing tools and large–scale sharing platforms have made copyright infringement, unauthorized redistribution, and covert manipulation easier to perpetrate and harder to detect. These risks lead to financial losses and weaken trust in digital ecosystems, creating an urgent need for technical protections that complement legal remedies. This paper presents a comprehensive survey of technologies and approaches for image copyright protection, with a particular emphasis on digital watermarking, deep learning-based methods, and blockchain-enabled frameworks. We systematically examine the principles, mechanisms, and applications of these techniques, evaluating their strengths, limitations, and potential synergies. In addition, we explore how these technologies can be effectively integrated into practical systems for secure, reliable, and scalable copyright protection of images. Finally, we identify existing challenges and propose promising future research directions to advance the state of the art in image copyright protection.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"244-263"},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339926","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CryptoMamba-SSM: Linear Complexity State Space Models for Cryptocurrency Volatility Prediction
Pub Date : 2026-01-12 DOI: 10.1109/OJCS.2026.3651226
Xiuyuan Zhao;Jingyi Liu;Ying Wang;Jiyuan Wang
Cryptocurrency markets exhibit complex microstructural dynamics characterized by high-frequency volatility bursts, rapid regime switching, and long-range temporal dependencies, which expose several limitations of existing volatility forecasting approaches. In particular, attention-based models suffer from prohibitive quadratic computational cost on long high-frequency sequences, while many recurrent architectures struggle to adapt to regime transitions, asymmetric volatility responses, and risk-aware uncertainty estimation. To address these gaps, this paper proposes CryptoMamba-SSM, a novel volatility prediction framework built upon Mamba-based state space models with linear computational complexity. CryptoMamba-SSM integrates selective memory mechanisms with structured state space representations to effectively capture critical market microstructure signals arising from liquidity shocks and sentiment transitions, while dynamically adjusting memory retention across different volatility regimes. This design enables efficient modeling of long-sequence dependencies inherent in cryptocurrency price movements without incurring the computational bottlenecks of traditional attention-based architectures. Through comprehensive experiments on Bitcoin historical data spanning multiple market regimes, we demonstrate that CryptoMamba-SSM consistently outperforms conventional LSTM, GRU, and Transformer baselines, achieving up to a 23.7% reduction in Mean Absolute Error and a 31.2% improvement in directional accuracy. The selective memory mechanism effectively captures regime-switching behaviors and microstructural anomalies, leading to more reliable short-term volatility risk quantification. Moreover, the linear-time complexity of CryptoMamba-SSM enables real-time processing of high-frequency trading data while maintaining strong generalization across diverse market conditions.
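A minimal sketch of the linear-complexity idea underlying state space models: a discretized linear recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t, scanned over a return series in a single O(T) pass. The selective (input-dependent) gating of Mamba and the volatility-forecasting heads of CryptoMamba-SSM are not reproduced; the matrices and the synthetic return series are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Run a fixed linear state-space recurrence over a 1-D input sequence."""
    h = np.zeros(A.shape[0])
    y = np.empty_like(x)
    for t, x_t in enumerate(x):          # one pass: cost grows linearly with sequence length
        h = A @ h + B * x_t
        y[t] = C @ h
    return y

rng = np.random.default_rng(0)
state_dim = 8
A = np.diag(rng.uniform(0.8, 0.99, state_dim))   # stable decay per state dimension
B = rng.normal(size=state_dim)
C = rng.normal(size=state_dim)
returns = rng.normal(scale=0.02, size=1000)      # stand-in for high-frequency BTC log-returns
features = ssm_scan(returns, A, B, C)            # smoothed long-range memory of past returns
print(features.shape)                            # (1000,)
```

In a learned SSM the recurrence matrices are trained and, in Mamba-style models, modulated by the input itself, which is what lets the memory adapt across volatility regimes.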
{"title":"CryptoMamba-SSM: Linear Complexity State Space Models for Cryptocurrency Volatility Prediction","authors":"Xiuyuan Zhao;Jingyi Liu;Ying Wang;Jiyuan Wang","doi":"10.1109/OJCS.2026.3651226","DOIUrl":"https://doi.org/10.1109/OJCS.2026.3651226","url":null,"abstract":"Cryptocurrency markets exhibit complex microstructural dynamics characterized by high-frequency volatility bursts, rapid regime switching, and long-range temporal dependencies, which expose several limitations of existing volatility forecasting approaches. In particular, attention-based models suffer from prohibitive quadratic computational cost on long high-frequency sequences, while many recurrent architectures struggle to adapt to regime transitions, asymmetric volatility responses, and risk-aware uncertainty estimation. To address these gaps, this paper proposes <bold>CryptoMamba-SSM</b>, a novel volatility prediction framework built upon Mamba-based state space models with linear computational complexity. CryptoMamba-SSM integrates selective memory mechanisms with structured state space representations to effectively capture critical market microstructure signals arising from liquidity shocks and sentiment transitions, while dynamically adjusting memory retention across different volatility regimes. This design enables efficient modeling of long-sequence dependencies inherent in cryptocurrency price movements without incurring the computational bottlenecks of traditional attention-based architectures. Through comprehensive experiments on Bitcoin historical data spanning multiple market regimes, we demonstrate that CryptoMamba-SSM consistently outperforms conventional LSTM, GRU, and Transformer baselines, achieving up to a 23.7% reduction in Mean Absolute Error and a 31.2% improvement in directional accuracy. The selective memory mechanism effectively captures regime-switching behaviors and microstructural anomalies, leading to more reliable short-term volatility risk quantification. Moreover, the linear-time complexity of CryptoMamba-SSM enables real-time processing of high-frequency trading data while maintaining strong generalization across diverse market conditions.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"226-243"},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339936","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Multi-Task Neural Framework for Unified Alert Processing and Incident Prediction in Enterprise IT Systems
Pub Date : 2026-01-12 DOI: 10.1109/OJCS.2026.3651756
Mohammed Saad Javeed;Jannatul Maua;Rahomotul Islam;Mumtahina Ahmed;M. F. Mridha;Md. Jakir Hossen
Effective incident management in modern IT systems requires timely interpretation and routing of alerts generated from diverse sources such as SNMP Traps, Syslog messages, and xMatters notifications. However, conventional frameworks often lack unified processing and intelligent automation, resulting in delayed response and SLA violations. This paper presents an AI-enhanced unified alerting and incident management framework that integrates heterogeneous alert streams via the ServiceNow platform. Leveraging two real-world datasets comprising over 140,000 event records and 24,000 unique incidents, we implement a multi-task deep neural network to jointly predict resolution time, incident priority, and responsible assignment group. The proposed method incorporates temporal feature engineering, trainable embeddings for categorical data, and variational autoencoders for dimensionality reduction. A synthetic alert-source simulation is introduced to mimic real-world alert diversity within the data pipeline. Experimental results demonstrate superior performance over baseline models in all key metrics, validating the effectiveness of the proposed architecture. The framework sets the stage for scalable, automated, and context-aware incident triaging in enterprise IT environments.
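A minimal sketch of the multi-task setup described above: a shared MLP trunk with three heads jointly trained to predict resolution time (regression), priority, and assignment group (both classification). Feature dimensions, head sizes, and equal loss weights are assumptions; the categorical embeddings and VAE dimensionality-reduction step are omitted.

```python
import torch
import torch.nn as nn

class MultiTaskIncidentNet(nn.Module):
    def __init__(self, in_dim=64, n_priorities=4, n_groups=20):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 64), nn.ReLU())
        self.time_head = nn.Linear(64, 1)            # hours to resolution
        self.priority_head = nn.Linear(64, n_priorities)
        self.group_head = nn.Linear(64, n_groups)

    def forward(self, x):
        z = self.trunk(x)
        return self.time_head(z).squeeze(-1), self.priority_head(z), self.group_head(z)

model = MultiTaskIncidentNet()
x = torch.randn(32, 64)                              # toy engineered alert features
t_true = torch.rand(32) * 48
p_true = torch.randint(0, 4, (32,))
g_true = torch.randint(0, 20, (32,))

t_hat, p_hat, g_hat = model(x)
# Joint objective: one regression loss plus two classification losses, equally weighted.
loss = (nn.functional.mse_loss(t_hat, t_true)
        + nn.functional.cross_entropy(p_hat, p_true)
        + nn.functional.cross_entropy(g_hat, g_true))
loss.backward()
```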
{"title":"A Multi-Task Neural Framework for Unified Alert Processing and Incident Prediction in Enterprise IT Systems","authors":"Mohammed Saad Javeed;Jannatul Maua;Rahomotul Islam;Mumtahina Ahmed;M. F. Mridha;Md. Jakir Hossen","doi":"10.1109/OJCS.2026.3651756","DOIUrl":"https://doi.org/10.1109/OJCS.2026.3651756","url":null,"abstract":"Effective incident management in modern IT systems requires timely interpretation and routing of alerts generated from diverse sources such as SNMP Traps, Syslog messages, and xMatters notifications. However, conventional frameworks often lack unified processing and intelligent automation, resulting in delayed response and SLA violations. This paper presents an AI-enhanced unified alerting and incident management framework that integrates heterogeneous alert streams via the ServiceNow platform. Leveraging two real-world datasets comprising over 140,000 event records and 24,000 unique incidents, we implement a multi-task deep neural network to jointly predict resolution time, incident priority, and responsible assignment group. The proposed method incorporates temporal feature engineering, trainable embeddings for categorical data, and variational autoencoders for dimensionality reduction. A synthetic alert-source simulation is introduced to mimic real-world alert diversity within the data pipeline. Experimental results demonstrate superior performance over baseline models in all key metrics, validating the effectiveness of the proposed architecture. The framework sets the stage for scalable, automated, and context-aware incident triaging in enterprise IT environments.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"264-275"},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339966","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial–Temporal Transformers With Stochastic Time-Warping and Joint-Wise Encoding for Rehabilitation Exercise Assessment
Pub Date : 2026-01-01 DOI: 10.1109/OJCS.2025.3650355
Tanawat Matangkasombut;Wuttipong Kumwilaisak;Chatchawarn Hansakunbuntheung;Nattanun Thatphithakkul
Accurate and objective assessment of rehabilitation exercises is critical for ensuring correct execution and maximizing patient recovery, particularly in unsupervised or home-based settings. Existing deep learning approaches frequently rely on graph-based skeletal representations with predefined topologies, which constrain the discovery of long-range or task-specific joint dependencies and limit adaptability across datasets with varying skeletal definitions. To address these limitations, we propose a Spatial–Temporal Transformer framework that directly models 3D joint position data without requiring an explicit adjacency matrix. The framework incorporates a joint-wise feature encoding and structure embedding mechanism to provide unique representations for each joint, thereby mitigating ambiguities arising from symmetry or overlapping movements. Furthermore, a stochastic time-warping augmentation strategy is introduced to simulate execution speed variations, enhancing robustness to diverse patient movement patterns. By applying small, randomized temporal scaling to local segments while consistently interpolating spatial coordinates within temporal boundaries, this stochastic variation enriches the dataset significantly while preserving the biomechanical patterns. Experimental results on the KIMORE dataset demonstrate that the proposed method reduces mean absolute deviation (MAD) by 67.4% relative to the current state of the art, while also maintaining strong generalization on the UI-PRMD dataset. The approach is compatible with multiple pose estimation algorithms and acquisition modalities, making it suitable for deployment in real-world telerehabilitation and clinical monitoring applications.
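A minimal sketch of the stochastic time-warping augmentation described above: each temporal segment of a (frames, joints, 3) skeleton sequence is stretched or compressed by a small random factor, and the warped sequence is re-interpolated onto a uniform grid of the original length. The segment count and scaling range are illustrative assumptions.

```python
import numpy as np

def stochastic_time_warp(seq, n_segments=4, max_scale=0.15, rng=None):
    """Randomly rescale local temporal segments, then resample to the original length."""
    if rng is None:
        rng = np.random.default_rng()
    T = seq.shape[0]
    # Per-segment durations, each stretched/compressed by up to +/- max_scale.
    bounds = np.linspace(0, T, n_segments + 1)
    durations = np.diff(bounds) * rng.uniform(1 - max_scale, 1 + max_scale, n_segments)
    warped_t = np.concatenate([[0.0], np.cumsum(durations)])
    # Map original frame indices onto the warped timeline, then resample uniformly.
    src_t = np.interp(np.arange(T, dtype=float), bounds, warped_t)
    new_t = np.linspace(src_t[0], src_t[-1], T)
    flat = seq.reshape(T, -1)
    out = np.stack([np.interp(new_t, src_t, flat[:, d]) for d in range(flat.shape[1])], axis=1)
    return out.reshape(seq.shape)

skeleton = np.random.randn(120, 25, 3)          # 120 frames, 25 joints, xyz coordinates
augmented = stochastic_time_warp(skeleton)
print(augmented.shape)                          # (120, 25, 3)
```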
{"title":"Spatial–Temporal Transformers With Stochastic Time-Warping and Joint-Wise Encoding for Rehabilitation Exercise Assessment","authors":"Tanawat Matangkasombut;Wuttipong Kumwilaisak;Chatchawarn Hansakunbuntheung;Nattanun Thatphithakkul","doi":"10.1109/OJCS.2025.3650355","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3650355","url":null,"abstract":"Accurate and objective assessment of rehabilitation exercises is critical for ensuring correct execution and maximizing patient recovery, particularly in unsupervised or home-based settings. Existing deep learning approaches frequently rely on graph-based skeletal representations with predefined topologies, which constrain the discovery of long-range or task-specific joint dependencies and limit adaptability across datasets with varying skeletal definitions. To address these limitations, we propose a Spatial–Temporal Transformer framework that directly models 3D joint position data without requiring an explicit adjacency matrix. The framework incorporates a joint-wise feature encoding and structure embedding mechanism to provide unique representations for each joint, thereby mitigating ambiguities arising from symmetry or overlapping movements. Furthermore, a stochastic time-warping augmentation strategy is introduced to simulate execution speed variations, enhancing robustness to diverse patient movement patterns. By applying small, randomized temporal scaling to local segments while consistently interpolating spatial coordinates within temporal boundaries, this stochastic variation enriches the dataset significantly while preserving the biomechanical patterns. Experimental results on the KIMORE dataset demonstrate that the proposed method reduces mean absolute deviation (MAD) by 67.4 % relative to the current state of the art, while also maintaining strong generalization on the UI-PRMD dataset. The approach is compatible with multiple pose estimation algorithms and acquisition modalities, making it suitable for deployment in real-world telerehabilitation and clinical monitoring applications.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"190-201"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11321303","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analysis of Transferable Adversarial Evasion Attack Detection in IoT and Industrial ADS
Pub Date : 2025-12-26 DOI: 10.1109/OJCS.2025.3649157
Iman H. Meskini;Cristina Alcaraz;Rodrigo Roman Castro;Javier Lopez
Anomaly Detection Systems (ADS) are essential in Industrial and Internet of Things (IIoT) environments, identifying equipment failures, environmental anomalies, operational irregularities, and cyberattacks. However, the increasing reliance on Machine Learning and Deep Learning (DL) exposes ADS to adversarial attacks, particularly transferable evasion attacks, where Adversarial Examples (AE) crafted for one model can deceive others. Despite their importance, limited research has examined the transferability of adversarial attacks in industrial and IoT contexts or the effectiveness of defense strategies against them. This work systematically evaluates the transferability of adversarial evasion attacks across six ADS models, including both tree-based and neural network architectures, trained on industrial and IIoT scenario datasets. We also analyze multiple adversarial detection methods, measuring not only their performance, but also their computational efficiency in terms of execution time, processor utilization, and energy consumption. Our results show that most ADS are vulnerable to transferable evasion attacks and that existing detection methods fail in model- and attack-agnostic settings. We further demonstrate that incorporating adversarial learning with a small set of low-perturbation examples significantly improves detection while maintaining low computational overhead, enabling practical and efficient real-time deployment.
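A minimal sketch of the transferability measurement itself: adversarial examples are crafted with FGSM against a surrogate model and then evaluated against an independently initialized target model. Toy untrained MLPs on random features stand in for the six trained ADS models and IIoT datasets used in the paper; in practice both models would first be trained on the same detection task.

```python
import torch
import torch.nn as nn

def make_model(in_dim=20):
    """Toy binary detector: class 0 = benign, class 1 = attack traffic."""
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 2))

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM perturbation computed against the given (surrogate) model."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

torch.manual_seed(0)
surrogate, target = make_model(), make_model()   # would be trained ADS models in practice
x = torch.randn(256, 20)                         # toy feature vectors of attack traffic
y = torch.ones(256, dtype=torch.long)            # true label: attack

x_adv = fgsm(surrogate, x, y)                    # crafted against the surrogate only
with torch.no_grad():
    rate = (target(x_adv).argmax(1) == 0).float().mean().item()
print(f"transfer evasion rate on the target model: {rate:.2%}")
```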
{"title":"Analysis of Transferable Adversarial Evasion Attack Detection in IoT and Industrial ADS","authors":"Iman H. Meskini;Cristina Alcaraz;Rodrigo Roman Castro;Javier Lopez","doi":"10.1109/OJCS.2025.3649157","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3649157","url":null,"abstract":"Anomaly Detection Systems (ADS) are essential in Industrial and Internet of Things (IIoT) environments by identifying equipment failures, environmental anomalies, operational irregularities, and cyberattacks. However, the increasing reliance on Machine Learning and Deep Learning (DL) exposes ADS to adversarial attacks, particularly transferable evasion attacks, where Adversarial Examples (AE) crafted for one model can deceive others. Despite their importance, limited research has examined the transferability of adversarial attacks in industrial and IoT contexts or the effectiveness of defense strategies against them. This work systematically evaluates the transferability of <italic>adversarial evasion attacks</i> across six ADS models, including both tree-based and neural network architectures, trained on industrial and IIoT scenarios datasets. We also analyze multiple adversarial detection methods, measuring not only their performance, but also their computational efficiency in terms of execution time, processor utilization, and energy consumption. Our results show that most ADS are vulnerable to transferable evasion attacks and that existing detection methods fail in model- and attack-agnostic settings. We further demonstrate that incorporating adversarial learning with a small set of low-perturbation examples significantly improves detection while maintaining low computational overhead, enabling practical and efficient real-time deployment.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"142-153"},"PeriodicalIF":0.0,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11316607","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Hybrid Deep Learning and Quantum Optimization Framework for Ransomware Response in Healthcare
Pub Date : 2025-12-26 DOI: 10.1109/OJCS.2025.3648741
Ahsan Ahmed;Md Aktarujjaman;Mohammad Moniruzzaman;Md Shahab Uddin;Arifa Akter Eva;M. F. Mridha;Kyoungyee Kim;Jungpil Shin
Ransomware poses a growing threat to healthcare systems, compromising patient safety, operational continuity, and financial stability. Although machine learning techniques have been widely used for intrusion detection, most approaches do not support real-time, cost-sensitive response planning. In this paper, we propose a hybrid framework that integrates deep learning with quantum optimization to both predict the severity of ransomware infection and recommend optimal recovery strategies. The system employs a multilayer perceptron (MLP) trained on structured ransomware incident data to forecast infection rates, followed by a quantum post-processing module using the Quantum Approximate Optimization Algorithm (QAOA) to minimize operational and financial costs through discrete response decisions. We evaluated the framework on a realistic healthcare ransomware dataset consisting of 5,000 simulated attack scenarios. Our approach achieves a root mean squared error (RMSE) of 0.073 in the prediction of infection rates and demonstrates up to 25% cost savings over classical heuristics in recovery decisions. Extensive experiments confirm the generalizability of the model to unseen attack types, the scalability across data volumes, and the stability of the decision in risk contexts. The proposed method represents a step toward intelligent real-time ransomware mitigation systems for high-risk environments such as healthcare.
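A minimal sketch of the decision stage: recovery actions as binary variables whose total cost combines per-action spend with a penalty proportional to the residual, MLP-predicted infection rate left uncontained. Exhaustive enumeration is used here as a classical stand-in for the paper's QAOA step; the action names, costs, containment fractions, and penalty form are illustrative assumptions.

```python
from itertools import product

# Each hypothetical action: (operational + financial cost, fraction of infection contained).
actions = {
    "isolate_network_segment": (3.0, 0.40),
    "restore_from_backup":     (5.0, 0.35),
    "rebuild_endpoints":       (8.0, 0.50),
    "engage_ir_retainer":      (4.0, 0.25),
}

def total_cost(choice, predicted_infection_rate, penalty=20.0):
    """Spend on selected actions plus a penalty for uncontained infection."""
    spend = sum(c for (c, _), on in zip(actions.values(), choice) if on)
    contained = sum(f for (_, f), on in zip(actions.values(), choice) if on)
    residual = max(predicted_infection_rate - contained, 0.0)
    return spend + penalty * residual

infection_rate = 0.62                 # e.g., the output of the trained MLP regressor
best = min(product([0, 1], repeat=len(actions)),
           key=lambda c: total_cost(c, infection_rate))
chosen = [name for name, on in zip(actions, best) if on]
print(chosen, round(total_cost(best, infection_rate), 2))
```

Formulated this way, the objective maps naturally onto a QUBO that a QAOA circuit could optimize; the enumeration above simply makes the decision logic concrete.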
{"title":"A Hybrid Deep Learning and Quantum Optimization Framework for Ransomware Response in Healthcare","authors":"Ahsan Ahmed;Md Aktarujjaman;Mohammad Moniruzzaman;Md Shahab Uddin;Arifa Akter Eva;M. F. Mridha;Kyoungyee Kim;Jungpil Shin","doi":"10.1109/OJCS.2025.3648741","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3648741","url":null,"abstract":"Ransomware poses a growing threat to healthcare systems, compromising patient safety, operational continuity, and financial stability. Although machine learning techniques have been widely used for intrusion detection, most approaches do not support real-time, cost-sensitive response planning. In this paper, we propose a hybrid framework that integrates deep learning with quantum optimization to both predict the severity of ransomware infection and recommend optimal recovery strategies. The system employs a multilayer perceptron (MLP) trained on structured ransomware incident data to forecast infection rates, followed by a quantum post-processing module using the Quantum Approximate Optimization Algorithm (QAOA) to minimize operational and financial costs through discrete response decisions. We evaluated the framework on a realistic healthcare ransomware dataset consisting of 5,000 simulated attack scenarios. Our approach achieves a root mean squared error (RMSE) of 0.073 in the prediction of infection rates and demonstrates up to 25% cost savings over classical heuristics in recovery decisions. Extensive experiments confirm the generalizability of the model to unseen attack types, the scalability across data volumes, and the stability of the decision in risk contexts. The proposed method represents a step toward intelligent real-time ransomware mitigation systems for high-risk environments such as healthcare.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"7 ","pages":"154-165"},"PeriodicalIF":0.0,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11316401","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0