
Latest Articles in Concurrency and Computation: Practice & Experience

OD-H-SABE: A Hierarchical Searchable Attribute-Based Encryption Scheme With Outsourced Decryption for Blockchain-Based Data Sharing
IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-11 | DOI: 10.1002/cpe.70510
Gaimei Gao, Yiqing Wei, Jingyue Wang, Chunxia Liu, Junji Li

The growing demand for data sharing in domains such as health care highlights limitations in existing solutions, including low search efficiency, coarse-grained access control, and heavy decryption overhead on users. To address these challenges, this paper proposes a hierarchical searchable attribute-based encryption scheme with outsourced decryption for blockchain-based data sharing (OD-H-SABE). OD-H-SABE introduces a hierarchical attribute structure alongside an outsourced decryption mechanism that offloads computationally intensive bilinear operations to the cloud server. Consequently, users only need to perform a single lightweight operation to complete decryption, significantly alleviating the computational burden. Furthermore, the scheme integrates searchable encryption with multi-keyword aggregate hashing, enabling efficient search with constant complexity regardless of the number of keywords. Leveraging the transparency and immutability of blockchain, smart contracts verify the integrity of results returned from the cloud server, ensuring data security and trustworthiness throughout the sharing process. Theoretical and experimental analyses demonstrate that OD-H-SABE achieves notable advantages over traditional schemes in terms of security, search efficiency, and computational overhead. For example, compared to MKS-VABE and BEM-ABSE, OD-H-SABE reduces encryption and user-side decryption overhead by approximately 20% and 42%, respectively. This makes it a practical and lightweight solution for constructing secure and efficient blockchain-based data-sharing platforms.
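The constant-complexity search claim rests on aggregating all of a document's keyword hashes into a single value, so a trapdoor comparison costs the same no matter how many keywords the query carries. Below is a minimal plaintext-side sketch of that idea only; the paper's actual construction operates over encrypted trapdoors, and the XOR combiner and function names here are illustrative assumptions, not the scheme itself.

```python
import hashlib

def keyword_digest(kw: str) -> int:
    # Hash one keyword to a fixed-width integer.
    return int.from_bytes(hashlib.sha256(kw.encode()).digest(), "big")

def aggregate_hash(keywords) -> int:
    # Order-independent aggregate: XOR of the per-keyword digests.
    # One index entry per document, however many keywords it carries.
    agg = 0
    for kw in set(keywords):
        agg ^= keyword_digest(kw)
    return agg

# Index side: the data owner stores one aggregate per document.
index = {"doc1": aggregate_hash(["cardiology", "diabetes", "2026"])}

# Query side: the same keyword set, in any order, matches with a
# single integer comparison -- constant cost in the keyword count.
trapdoor = aggregate_hash(["2026", "cardiology", "diabetes"])
matches = [doc for doc, h in index.items() if h == trapdoor]
```

Note the trade-off this sketch makes visible: aggregating keywords into one value gives O(1) matching but only supports whole-set queries, which is why multi-keyword schemes design the aggregation carefully.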

Citations: 0
Cross-Market Portfolio Optimization via Structure-Aware Deep Reinforcement Learning
IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-11 | DOI: 10.1002/cpe.70540
Yiliang Qiao, Yan Zhu, Xu Guo

Financial markets exhibit high levels of non-stationarity and structural heterogeneity, which pose significant challenges to reinforcement learning (RL)-based portfolio optimization methods. To address these challenges, this paper proposes a Structure-Aware Deep Reinforcement Learning (SADRL) framework for cross-market portfolio optimization. The proposed framework explicitly models market structural dynamics through a structure encoder that identifies latent market regimes, while a policy learner adapts investment strategies accordingly. This dual-level learning mechanism enables the model to generalize across heterogeneous markets and remain stable under regime shifts. Extensive experiments on multiple cross-market datasets demonstrate that SADRL achieves superior risk-adjusted returns and improved robustness compared with conventional RL-based baselines. These findings highlight the potential of structure-aware learning for developing intelligent and adaptive decision-making systems in financial markets.

Citations: 0
Vehicle Detection and Tracking Method for Highway Fog Scene: Fusion Improvement of AG-YOLOv10n and DeepSORT
IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-07 | DOI: 10.1002/cpe.70553
Liu Liqun, Xie Yupeng, Liu Ting

Fog presents a substantial latent hazard to highway traffic safety, significantly impairing drivers' visibility, thereby elevating the risk of high-speed collisions. While driving, occlusion of multiple targets against complex backgrounds diminishes the detection rate of vehicle detectors. Many existing vehicle detection methods depend on bounding box representations for vehicle identification, limiting their capacity to provide accurate localization, particularly in foggy highway conditions. To enable early warnings of preceding vehicles in fog, this article proposes AG-YOLOv10n, a novel vehicle detection method for foggy environments. This approach improves the model's adaptability to fog-induced target features by replacing standard convolutional layers with AKConv and incorporating the GCAM gated convolutional attention module to enhance the extraction of locally salient information, thereby improving vehicle recognition accuracy in fog. Simultaneously, the DeepSORT tracking algorithm is enhanced, with AG-YOLOv10n replacing the traditional Faster R-CNN detector, and combined with the Kalman filter and Hungarian matching mechanism to achieve stable tracking of vehicle targets. The proposed method enhances the accuracy, recall rate, and average precision of the baseline model by 1.4%, 0.6%, and 1.1%, respectively, on the foggy vehicle dataset. The results demonstrate that the proposed method effectively improves detection accuracy, real-time performance, and system robustness while maintaining the model's lightweight nature, which holds significant practical application for highway fog driving safety.
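The track–detection association step described above (Kalman-predicted boxes matched to new detections via Hungarian matching) reduces to an assignment problem over an IoU cost matrix. A self-contained sketch follows, using exhaustive search in place of the Hungarian algorithm for clarity (same optimal result on small inputs); the `(x1, y1, x2, y2)` box format and the `min_iou` gate are assumptions for illustration.

```python
from itertools import permutations

def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, dets, min_iou=0.3):
    # Optimal assignment maximizing total IoU, gating out pairs below
    # min_iou. Exhaustive here; DeepSORT solves the same objective
    # with the Hungarian algorithm. Assumes len(dets) >= len(tracks).
    best_pairs, best_score = [], -1.0
    for perm in permutations(range(len(dets)), len(tracks)):
        pairs = [(ti, di) for ti, di in enumerate(perm)
                 if iou(tracks[ti], dets[di]) >= min_iou]
        score = sum(iou(tracks[ti], dets[di]) for ti, di in pairs)
        if score > best_score:
            best_score, best_pairs = score, pairs
    return best_pairs
```

Tracks that receive no detection above the IoU gate would then coast on their Kalman prediction, which is how such pipelines ride out the brief occlusions the abstract mentions.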

Citations: 0
Personalized Federated Learning for Detecting False Data Injection Attacks in Power Grids
IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-07 | DOI: 10.1002/cpe.70543
Mengwei Lv, Ruijuan Zheng, Junlong Zhu, Yongsheng Dong, Qingtao Wu, Xuhui Zhao

In the context of security protection against false data injection attacks (FDIAs) in power grids, traditional federated learning effectively utilizes decentralized data resources for distributed training and achieves global collaboration. However, during the model aggregation process, it often overlooks or drowns out locally sparse key features, significantly increasing the risk of missed detection of specific attack patterns. To address this issue, this paper proposes a personalized detection framework based on federated learning. Initially, a bidirectional transformer detection (BTD) model is deployed on the client side and trained on local data. Subsequently, through personalized federated learning, the client dynamically combines the weights of the global and local models to generate a personalized detection model. The framework employs a collaborative optimization mechanism of “global knowledge sharing and local feature adaptation” to effectively mitigate the feature drowning problem while strictly safeguarding data privacy. Compared to existing methods, this approach significantly enhances detection accuracy and robustness against differentiated attack patterns, thereby establishing a more reliable security defense system for smart grids.
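The per-client combination of global and local weights can be sketched as a per-layer convex interpolation. The single mixing coefficient `alpha` below is an illustrative stand-in for whatever dynamic weighting rule the paper actually uses; the point is only that each client ends up with its own blend rather than one aggregated model.

```python
def personalize(global_w, local_w, alpha):
    # Per-layer convex combination of global and local parameters.
    # alpha = 0 keeps the shared global model (full collaboration);
    # alpha = 1 keeps the purely local model (full personalization).
    return {layer: alpha * local_w[layer] + (1 - alpha) * global_w[layer]
            for layer in global_w}

# Toy single-scalar "layers": a client moves 25% of the way toward
# its local model, preserving most of the globally shared knowledge
# while retaining its locally sparse features.
mixed = personalize({"fc1": 0.0, "fc2": 2.0},
                    {"fc1": 1.0, "fc2": 4.0},
                    alpha=0.25)
```

In practice the layer values would be weight tensors rather than scalars, but the interpolation is applied elementwise in exactly the same way.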

Citations: 0
Telecommunication Fraud Detection via Improved Graph Convolution and Bidirectional Temporal Learning With Adaptive Fusion Strategy
IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70544
Abdulrahman Mathkar Alotaibi

Telecommunication fraud has escalated in complexity due to evolving adversarial strategies that exploit dynamic communication patterns, multimodal signals, and semantic manipulation. Recent developments in deep learning and graph-based modeling have shown promise; however, existing systems struggle to simultaneously capture temporal dependencies, relational feature structures, and linguistic nuances embedded in modern fraud activities. Addressing these limitations, this study proposes an Improved Graph Convolutional Network–Bidirectional LSTM (IGCN–Bi-LSTM) framework integrated within a unified signal-to-text and multi-perspective feature-engineering pipeline for high-accuracy fraud detection. The system begins by converting raw telecommunication signals into structured textual representations through a CNN-driven signal-to-text encoder, enabling the extraction of temporal–spectral patterns. These sequences are subsequently enriched through a comprehensive feature-engineering module that synthesizes linguistic markers, statistical descriptors, lexical indicators, and semantic embeddings. The hybrid IGCN–Bi-LSTM model then jointly learns higher-order relational dependencies among features and bidirectional temporal patterns, while an adaptive score-level fusion mechanism optimally weights model outputs for robust classification. Experiments were conducted using a high-quality synthetic Fraud Detection Transactions Dataset comprising 50,000 transactions with 21 heterogeneous attributes covering behavioral, contextual, financial, and security-related characteristics. Extensive preprocessing, normalization, and stratified data partitioning ensured reliable training of the hybrid model in a GPU-accelerated environment. The proposed model demonstrated substantial improvements over baseline methods by effectively capturing weakly correlated, high-dimensional features and rare-event patterns. 
Performance evaluation using precision-recall metrics confirmed the superiority of the IGCN–Bi-LSTM fusion, particularly in highly imbalanced scenarios where conventional accuracy metrics fail to reflect true detection capability.
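Score-level fusion of the two branches (graph and temporal) amounts to a weighted average of their output scores. A minimal sketch follows, with "adaptive" approximated by a grid search for the weight that maximizes validation accuracy; the paper's actual mechanism may learn this weight differently, and the 0.5 decision threshold is an assumption.

```python
def fuse(scores_a, scores_b, w):
    # Score-level fusion: weighted average of two models' outputs.
    return [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]

def best_weight(scores_a, scores_b, labels):
    # "Adaptive" stand-in: pick the fusion weight that maximizes
    # validation accuracy over a coarse grid.
    def accuracy(w):
        preds = [1 if s >= 0.5 else 0
                 for s in fuse(scores_a, scores_b, w)]
        return sum(p == y for p, y in zip(preds, labels))
    return max((i / 10 for i in range(11)), key=accuracy)
```

On imbalanced fraud data, the selection criterion inside `accuracy` would typically be replaced by a precision-recall-based metric, for exactly the reason the abstract gives.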

Citations: 0
Assessment of Multicore Processor Soft Error Reliability Using BBRO-DNN and SSF-FIS Models
IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70525
Usha Jadhav, P. Malathi

The development of virtual platform frameworks has made it possible to perform early soft error analysis of more realistic multicore systems, that is, systems with real software stacks and state-of-the-art ISAs. Because of the underlying frameworks' strong observability and simulation performance, substantial error/failure-related data can be generated and collected in a reasonable amount of time even with complicated software stack setups. Parameters (i.e., features) that do not bear directly on system soft error analysis must be filtered out when working with sizable failure-related data sets drawn from several fault campaigns. In this regard, the paper proposes an assessment of multicore processor soft error reliability using BBRO-DNN and SSF-FIS models. First, source code is compiled into executable code using the LLVM compiler and run on the gem5 virtual platform. Then, faults are injected through the virtual platform's fault injection module. A profiling module analyzes the faults and the system's reaction and submits a report. The fault report is fed into the proposed BBRO-DNN model to classify the fault type. Finally, the system's reliability is evaluated using the classified fault types. Experiments compare the proposed model with existing models to demonstrate its superiority.
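The final step, turning classified fault outcomes into a reliability figure, can be sketched with a conventional fault-injection taxonomy. The masked/SDC/crash classes and rate-based metric below are a common convention and an assumption for illustration; the paper's SSF-FIS scoring and BBRO-DNN fault classes may differ.

```python
from collections import Counter

def reliability_report(outcomes):
    # Summarize a fault-injection campaign. Each outcome is the label
    # a classifier (BBRO-DNN in the paper) assigned to one injected
    # fault: "masked" (no visible effect), "sdc" (silent data
    # corruption), or "crash" (hang/abort).
    counts = Counter(outcomes)
    total = len(outcomes)
    return {
        "runs": total,
        "masked_rate": counts["masked"] / total,
        "sdc_rate": counts["sdc"] / total,
        "crash_rate": counts["crash"] / total,
    }

# A toy campaign of 100 injections: most faults are masked.
report = reliability_report(["masked"] * 90 + ["sdc"] * 6 + ["crash"] * 4)
```

The masked rate is the quantity usually reported as architectural soft error reliability; SDC and crash rates are tracked separately because they call for different mitigations.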

Citations: 0
A Cross-Chain Architecture for Interoperable and Trusted Multi-Party Collaboration
IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70531
Ou Wu, Yang Wang, Bocong Zhao, Chaoran Luo

Cross-chain interoperability is a focal point in blockchain research. However, existing efforts predominantly concentrate on functional realization, such as asset transfer and message relaying, while often overlooking the critical dimension of performance. The effective implementation of these functions faces significant performance challenges, including data verification overhead, traceability delays, and inflexible contract execution. Precisely addressing this gap, this paper proposes a performance-optimized, relay-based architecture. Three dedicated core modules are introduced to address these performance bottlenecks: A Shared Data Life-cycle Management Module for efficient data governance, a Real-time Cross-chain Traceability Module for low-latency tracking, and a Dynamic Smart Contract Management Module for agile cross-chain logic execution. Implemented on BitXHub, our system demonstrates superior performance, successfully processing 937 out of 1000 transactions and achieving a latency of 6.7 ms under 800 concurrent requests. The framework's practical effectiveness is further validated through deployments in a cross-border seafood supply chain and a multi-party Deoxyribonucleic Acid (DNA) data sharing network, proving its value as a high-performance solution for complex real-world applications.

Citations: 0
Short-Term Electricity Load Forecasting Based on PCA-PSO-Kmeans++ Clustering and Improved DSC
IF 1.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-05 DOI: 10.1002/cpe.70549
Xue Zhu, Zhao Zhang, Hongyan Zhou, Xue-Bo Chen

Electricity load data typically exhibits pronounced trends, cyclicality, and randomness, and is also influenced by external factors such as weather, holidays, and socioeconomic activity. In addition, electricity loads usually have long-term dependencies, such as daily, weekly, and yearly periodicity. To address these challenges, we propose an electricity load forecasting method that combines PCA-PSO-Kmeans++ clustering with an improved depthwise separable convolution (DSC). This article consists of two parts: data processing and electricity load forecasting. The data processing part includes seasonal decomposition of the raw data, selection of suitable exogenous variables via the Pearson correlation coefficient, manual feature engineering of the raw electricity load data, and cluster analysis of the feature-processed data. The forecasting part of the model is trained using an improved depthwise separable convolution that incorporates an attention mechanism and residual connections. We evaluate the method on electricity load datasets from the US and the Nordic region. Experimental results show that clustering combined with the improved depthwise separable convolution yields more accurate and reliable electricity load forecasts. Based on these results, we quantify the performance gain contributed by clustering relative to the strongest non-clustered baseline and demonstrate that clustering combined with the improved DSC further enhances accuracy.
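As a rough illustration of the clustering stage (a minimal sketch under our own assumptions, not the authors' code), k-means++ seeding spreads the initial cluster centers by drawing each new center with probability proportional to its squared distance from the nearest center already chosen:

```python
import random


def kmeans_pp_init(points, k, seed=0):
    """k-means++ seeding: each new center is drawn with probability
    proportional to its squared distance from the nearest chosen center."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest current center.
        d2 = [min(sum((p[i] - c[i]) ** 2 for i in range(len(p))) for c in centers)
              for p in points]
        # Sample the next center with probability proportional to d2.
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers


# Two well-separated blobs: the two seeds land in different blobs.
pts = [(0.0, 0.0)] * 50 + [(10.0, 10.0)] * 50
centers = kmeans_pp_init(pts, 2, seed=42)
```

In the paper's pipeline this seeding would run on the PCA-reduced, PSO-weighted feature vectors; here it is shown on raw 2-D points only.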

{"title":"Short-Term Electricity Load Forecasting Based on PCA-PSO-Kmeans++ Clustering and Improved DSC","authors":"Xue Zhu,&nbsp;Zhao Zhang,&nbsp;Hongyan Zhou,&nbsp;Xue-Bo Chen","doi":"10.1002/cpe.70549","DOIUrl":"https://doi.org/10.1002/cpe.70549","url":null,"abstract":"<div>\u0000 \u0000 <p>Electricity load data is usually significantly trending, cyclical, and stochastic in nature; it is also influenced by external factors such as weather, holidays, and socioeconomic activities. In addition, electricity loads usually have long-time dependencies, such as daily, weekly, and yearly periodicity. To address these challenges, we propose an electricity load forecasting method that combines PCA-PSO-Kmeans++ clustering with improved depthwise separable convolution. This article consists of two parts: data processing and electricity load forecasting. The data processing part of this includes seasonal decomposition of raw data, the Pearson correlation coefficient to select suitable exogenous variables, manual feature processing of raw electricity load data, and cluster analysis of feature-processed data. The electricity load forecasting part of the model is trained using an improved depthwise separable convolution that incorporates an attention mechanism and residual connection. Electricity load forecasting on datasets from the US and Nordic region. Experimental results show that clustering combined with improved depthwise separable convolution is more accurate and reliable in electricity load forecasting. 
Based on experimental results, we quantify the performance gain contributed by clustering relative to the strongest non-clustered baseline and demonstrate that clustering combined with the improved DSC further enhances accuracy.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"38 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145986772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interpretable Machine Learning Framework for Predicting Spider Silk Toughness and Tensile Strength From Physicochemical and Genetic Features
IF 1.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-05 DOI: 10.1002/cpe.70533
Omid Mirzaei, Ahmet Ilhan, Boran Sekeroglu

Spider silk has exceptional mechanical properties, notably its high toughness, a measure of a material's ability to absorb energy before failure. Predicting toughness from physical, biochemical, and genetic features is challenging because of the nonlinear, multivariate interactions involved. This study presents a comprehensive machine learning framework to predict the toughness and tensile strength of spider silk fibers using interpretable, high-performing models. A curated dataset with varied physicochemical and structural features was used to train, tune, and evaluate multiple machine learning models, including Decision Tree, Support Vector Machine, Random Forest, Gradient Boosting, and XGBoost. Feature engineering introduced domain-specific constructs such as a toughness proxy and modulus transformations. Hyperparameters were tuned via Bayesian optimization to enhance model performance. Among all tested models, the tuned XGBoost regressor achieved the highest predictive accuracy (R² = 0.855 and 0.765), outperforming all other models. Feature importance analysis highlighted several key predictors, most notably Young's modulus, aligning with known biological mechanisms for both toughness and tensile strength. This work demonstrates how machine learning can be used not only for accurate prediction but also to uncover the underlying determinants of silk toughness. The proposed framework sets the stage for data-driven design of bioinspired synthetic fibers and represents a significant step toward computational modeling of high-performance biomaterials.
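The R² values reported above are the standard coefficient of determination; as a reminder of what is being measured (a generic formula, not the authors' evaluation code), R² = 1 − SS_res/SS_tot:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    1.0 means perfect prediction; 0.0 matches a mean-only baseline."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - ss_res / ss_tot
```

An R² of 0.855 therefore means the tuned model explains about 85.5% of the variance that a mean-only predictor leaves unexplained.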

{"title":"Interpretable Machine Learning Framework for Predicting Spider Silk Toughness and Tensile Strength From Physicochemical and Genetic Features","authors":"Omid Mirzaei,&nbsp;Ahmet Ilhan,&nbsp;Boran Sekeroglu","doi":"10.1002/cpe.70533","DOIUrl":"https://doi.org/10.1002/cpe.70533","url":null,"abstract":"<div>\u0000 \u0000 <p>Spider silk has exceptional mechanical properties, notably its high toughness, which is a measure of a material's ability to absorb energy before failure. Predicting toughness using physical, biochemical, and genetic features is a challenging task due to the nonlinear and multivariate interactions involved. This study presents a comprehensive machine learning framework to predict the toughness and the tensile strength of spider silk fibers using interpretable and high-performing models. A curated dataset with varied physicochemical and structural features was used to train, tune, and evaluate multiple machine learning models, including Decision Tree, support vector machines, Random Forest, Gradient Boosting, and XGBoost. Feature engineering steps introduced domain-specific constructs such as a toughness proxy and modulus transformations. Hyperparameter tuning was conducted via Bayesian optimization to enhance model performance. Among all tested models, the tuned XGBoost regressor achieved the highest predictive accuracy (<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msup>\u0000 <mrow>\u0000 <mi>R</mi>\u0000 </mrow>\u0000 <mrow>\u0000 <mn>2</mn>\u0000 </mrow>\u0000 </msup>\u0000 <mo>=</mo>\u0000 <mn>0</mn>\u0000 <mo>.</mo>\u0000 <mn>855</mn>\u0000 </mrow>\u0000 <annotation>$$ {R}^2=0.855 $$</annotation>\u0000 </semantics></math> and 0.765), outperforming all other models. Feature importance analysis highlighted several key predictors among Young's modulus, aligning with known biological mechanisms for both toughness and tensile strength. 
This work demonstrates how machine learning can be used not only for accurate prediction but also to uncover the underlying determinants of silk toughness. The proposed framework sets the stage for data-driven design of bioinspired synthetic fibers and represents a significant step toward the computational modeling of high-performance biomaterials.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"38 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145983451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Global to Local: A Dependency and Semantic Integration-Based Document-Level Biomedical Relation Extraction Method
IF 1.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-05 DOI: 10.1002/cpe.70551
Bin Zhou, Qingchuan Xu, Kai Che, Longbo Zhang, Hongzhen Cai, Linlin Xing

Document-level biomedical relation extraction aims to identify complex relationships between entity pairs in biomedical literature, which is crucial for automating medical knowledge applications. Existing methods are limited when handling non-local and multi-layered semantic dependencies, making it difficult to integrate global semantics with local interactions effectively. The goal of this study is to propose a novel model that addresses this issue and enhances the modeling of complex dependencies. This paper proposes a new model that combines global dependency graphs with multi-level semantic information graphs (DMK). Through a dual-graph collaborative mechanism, it integrates document-level contextual information to accurately model complex dependencies between entities. We introduce the KanChebConv convolutional layer, based on the Kolmogorov–Arnold Network (KAN), which replaces traditional linear weight matrices with learnable spline functions, thereby enhancing the model's ability to capture non-linear dependencies. We evaluated the model on the chemical–disease relation (CDR) and gene–disease relation (GDA) datasets. The results show that our model achieved the highest F1 score among the selected baselines on both datasets, validating its robustness and competitiveness. Through the collaborative mechanism of global and local information and the novel KAN convolutional layer, our model effectively improves the accuracy and robustness of document-level biomedical relation extraction, showing strong potential for practical applications.
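To give a concrete sense of the idea behind a KAN-style layer (an illustrative sketch under our own assumptions, not the paper's KanChebConv implementation), each fixed scalar weight is replaced by a small learnable function, here expressed in a Chebyshev polynomial basis; the `kan_edge` helper and its tanh squashing are hypothetical choices of ours:

```python
import math


def cheb_basis(x, degree):
    """Chebyshev polynomials T_0..T_degree at x in [-1, 1],
    via the recurrence T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)."""
    t = [1.0, x]
    for _ in range(2, degree + 1):
        t.append(2 * x * t[-1] - t[-2])
    return t[: degree + 1]


def kan_edge(x, coeffs):
    """KAN-style learnable edge: phi(x) = sum_k c_k * T_k(tanh(x)).
    The coefficients c_k are what training would adjust; tanh maps
    the raw input into the Chebyshev domain [-1, 1]."""
    z = math.tanh(x)
    return sum(c * t for c, t in zip(coeffs, cheb_basis(z, len(coeffs) - 1)))
```

With all coefficients zero except c₁ = 1, the edge reduces to phi(x) = tanh(x); richer coefficient vectors let each edge learn its own non-linearity, which is the capacity the abstract attributes to replacing linear weight matrices with learnable spline functions.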

{"title":"From Global to Local: A Dependency and Semantic Integration-Based Document-Level Biomedical Relation Extraction Method","authors":"Bin Zhou,&nbsp;Qingchuan Xu,&nbsp;Kai Che,&nbsp;Longbo Zhang,&nbsp;Hongzhen Cai,&nbsp;Linlin Xing","doi":"10.1002/cpe.70551","DOIUrl":"https://doi.org/10.1002/cpe.70551","url":null,"abstract":"<div>\u0000 \u0000 <p>Document level biomedical relation extraction aims to identify complex relationships between entity pairs in biomedical literature, which is crucial for the automation of medical knowledge applications. Existing methods face limitations when handling non-local and multi-layered semantic dependencies, making it difficult to effectively integrate global semantics with local interactions. The goal of this study is to propose a novel model to address this issue and enhance the ability to model complex dependencies. This paper proposes a new model that combines global dependency graphs with multi-level semantic information graphs (DMK). By utilizing a dual-graph collaborative mechanism, it integrates document-level contextual information to accurately model complex dependencies between entities. We introduce the KanChebConv convolutional layer based on the Kolmogorov–Arnold Network (KAN), replacing traditional linear weight matrices with learnable spline functions, thereby enhancing the model's ability to capture non-linear dependencies. We evaluated our model on the chemical–disease relation (CDR) dataset and the gene–disease relation (GDA) dataset. The results demonstrate that our model achieved the highest F1 score among the selected baselines on both datasets, thereby validating its robustness and competitiveness. 
Through the collaborative mechanism of global and local information and the innovative KAN convolutional layer, our model effectively improves the accuracy and robustness of document-level biomedical relation extraction, showcasing strong potential for practical applications.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"38 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145963958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0