
Latest Publications in Concurrency and Computation-Practice & Experience

Assessment of Multicore Processor Soft Error Reliability Using BBRO-DNN and SSF-FIS Models
IF 1.5 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70525
Usha Jadhav, P. Malathi

The development of virtual platform frameworks has made it possible to perform early soft error analysis of more realistic multicore systems, that is, systems running real software stacks on state-of-the-art ISAs. Because of the underlying frameworks' strong observability and simulation performance, large amounts of error/failure-related data can be generated and collected in a reasonable amount of time, even with complicated software stack setups. Parameters (i.e., features) that do not directly relate to the system's soft error behavior must be filtered out when working with sizable failure-related data sets coming from several fault campaigns. In this regard, the paper proposes an assessment of multicore processor soft error reliability using BBRO-DNN and SSF-FIS models. First, source code is compiled into executable code with the LLVM compiler and run on the Gem5 virtual platform. Then, faults are injected via the fault injection module of the virtual platform. A profiling module analyzes the injected faults and the system's reaction and produces a fault report. This report is fed into the proposed BBRO-DNN model to classify the fault type, and the system's reliability is finally evaluated from the classified fault types. Experiments compare the proposed model against existing models to demonstrate its superiority.
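The inject-profile-classify loop described in the abstract can be sketched in miniature. The snippet below is a hypothetical illustration (not the paper's BBRO-DNN pipeline): it flips a single bit in a simulated register value and classifies the run outcome the way a fault-campaign profiler typically labels it (masked, silent data corruption, or crash).

```python
import random

def inject_bit_flip(value: int, width: int = 32, rng=None) -> int:
    """Flip one randomly chosen bit of a register value (single-bit soft error)."""
    rng = rng or random
    bit = rng.randrange(width)
    return value ^ (1 << bit)

def classify_outcome(golden: int, faulty: int, crashed: bool) -> str:
    """Label a fault-campaign run the way a profiling module might report it."""
    if crashed:
        return "crash"
    if faulty == golden:
        return "masked"   # the fault had no architectural effect
    return "SDC"          # silent data corruption

rng = random.Random(42)
golden = 0x0000_00FF
faulty = inject_bit_flip(golden, rng=rng)
print(classify_outcome(golden, faulty, crashed=False))   # prints "SDC"
```

In a full campaign, many such labeled runs form the fault report that a classifier (here, the paper's BBRO-DNN) learns from.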

Citations: 0
A Cross-Chain Architecture for Interoperable and Trusted Multi-Party Collaboration
IF 1.5 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70531
Ou Wu, Yang Wang, Bocong Zhao, Chaoran Luo

Cross-chain interoperability is a focal point in blockchain research. However, existing efforts predominantly concentrate on functional realization, such as asset transfer and message relaying, while often overlooking the critical dimension of performance. The effective implementation of these functions faces significant performance challenges, including data verification overhead, traceability delays, and inflexible contract execution. To address this gap, this paper proposes a performance-optimized, relay-based architecture. Three dedicated core modules are introduced to address these performance bottlenecks: a Shared Data Life-cycle Management Module for efficient data governance, a Real-time Cross-chain Traceability Module for low-latency tracking, and a Dynamic Smart Contract Management Module for agile cross-chain logic execution. Implemented on BitXHub, our system demonstrates superior performance, successfully processing 937 out of 1000 transactions and achieving a latency of 6.7 ms under 800 concurrent requests. The framework's practical effectiveness is further validated through deployments in a cross-border seafood supply chain and a multi-party Deoxyribonucleic Acid (DNA) data sharing network, proving its value as a high-performance solution for complex real-world applications.
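The core idea of a relay-based architecture — a message crosses chains only if it matches a commitment anchored on the source chain — can be shown with a toy example. This sketch is hypothetical and much simpler than the paper's BitXHub-based system; `Relay`, `commit`, and the message fields are illustrative names only.

```python
import hashlib
import json

def commit(message: dict) -> str:
    """Deterministic commitment a source chain could anchor for a cross-chain message."""
    payload = json.dumps(message, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Relay:
    """Toy relay: forwards a message only if it matches an anchored commitment."""
    def __init__(self):
        self.anchored = set()

    def anchor(self, digest: str):
        self.anchored.add(digest)

    def forward(self, message: dict) -> bool:
        return commit(message) in self.anchored

relay = Relay()
msg = {"asset": "token-A", "amount": 10, "dest": "chain-B"}
relay.anchor(commit(msg))
print(relay.forward(msg))                      # True: matches the anchored commitment
print(relay.forward({**msg, "amount": 999}))   # False: tampered message is rejected
```

The verification-overhead challenge the abstract mentions arises because, unlike this toy, a real relay must verify commitments against block headers and consensus proofs rather than a local set.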

Citations: 0
Short-Term Electricity Load Forecasting Based on PCA-PSO-Kmeans++ Clustering and Improved DSC
IF 1.5 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70549
Xue Zhu, Zhao Zhang, Hongyan Zhou, Xue-Bo Chen

Electricity load data typically exhibits pronounced trends, cyclicality, and stochasticity; it is also influenced by external factors such as weather, holidays, and socioeconomic activity. In addition, electricity loads usually show long-term dependencies, such as daily, weekly, and yearly periodicity. To address these challenges, we propose an electricity load forecasting method that combines PCA-PSO-Kmeans++ clustering with an improved depthwise separable convolution. This article consists of two parts: data processing and electricity load forecasting. The data-processing part includes seasonal decomposition of the raw data, selection of suitable exogenous variables via the Pearson correlation coefficient, manual feature processing of the raw electricity load data, and cluster analysis of the feature-processed data. The forecasting part of the model is trained using an improved depthwise separable convolution that incorporates an attention mechanism and residual connections. The method is evaluated on electricity load datasets from the US and the Nordic region. Experimental results show that clustering combined with the improved depthwise separable convolution is more accurate and reliable for electricity load forecasting. Based on these results, we quantify the performance gain contributed by clustering relative to the strongest non-clustered baseline and demonstrate that clustering combined with the improved DSC further enhances accuracy.
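The exogenous-variable selection step the abstract describes — keep only variables whose Pearson correlation with the load is strong enough — is easy to sketch. The series, variable names, and the 0.5 threshold below are all hypothetical, chosen only to illustrate the mechanism.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

load = [310, 295, 330, 360, 405]   # hypothetical load series (MW)
temp = [15, 14, 18, 22, 27]        # hypothetical temperature series
hol = [0, 1, 0, 0, 1]              # hypothetical holiday flags

# keep exogenous variables whose |r| clears a chosen threshold, e.g. 0.5
candidates = {"temperature": temp, "holiday": hol}
selected = [k for k, v in candidates.items() if abs(pearson(load, v)) > 0.5]
print(selected)   # prints ['temperature']
```

On this toy data, temperature correlates strongly with load (r ≈ 0.998) while the holiday flag does not (r ≈ 0.21), so only temperature survives the filter.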

Citations: 0
Interpretable Machine Learning Framework for Predicting Spider Silk Toughness and Tensile Strength From Physicochemical and Genetic Features
IF 1.5 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70533
Omid Mirzaei, Ahmet Ilhan, Boran Sekeroglu

Spider silk has exceptional mechanical properties, notably its high toughness, a measure of a material's ability to absorb energy before failure. Predicting toughness from physical, biochemical, and genetic features is challenging due to the nonlinear and multivariate interactions involved. This study presents a comprehensive machine learning framework to predict the toughness and tensile strength of spider silk fibers using interpretable and high-performing models. A curated dataset with varied physicochemical and structural features was used to train, tune, and evaluate multiple machine learning models, including Decision Tree, support vector machines, Random Forest, Gradient Boosting, and XGBoost. Feature engineering steps introduced domain-specific constructs such as a toughness proxy and modulus transformations. Hyperparameter tuning was conducted via Bayesian optimization to enhance model performance. Among all tested models, the tuned XGBoost regressor achieved the highest predictive accuracy (R² = 0.855 and 0.765), outperforming all other models. Feature importance analysis highlighted several key predictors, including Young's modulus, aligning with known biological mechanisms for both toughness and tensile strength. This work demonstrates how machine learning can be used not only for accurate prediction but also to uncover the underlying determinants of silk toughness. The proposed framework sets the stage for data-driven design of bioinspired synthetic fibers and represents a significant step toward the computational modeling of high-performance biomaterials.
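The R² values the abstract reports are the coefficient of determination, 1 − SS_res/SS_tot. As a quick reminder of how such a score is computed (the toughness values below are made up, not from the paper's dataset):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

toughness_true = [120.0, 150.0, 90.0, 200.0]   # hypothetical values (MJ/m^3)
toughness_pred = [118.0, 155.0, 95.0, 190.0]
print(round(r_squared(toughness_true, toughness_pred), 3))   # prints 0.977
```

An R² of 0.855 thus means the model explains about 85.5% of the variance in toughness that a constant mean predictor would leave unexplained.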

Citations: 0
From Global to Local: A Dependency and Semantic Integration-Based Document-Level Biomedical Relation Extraction Method
IF 1.5 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1002/cpe.70551
Bin Zhou, Qingchuan Xu, Kai Che, Longbo Zhang, Hongzhen Cai, Linlin Xing

Document-level biomedical relation extraction aims to identify complex relationships between entity pairs in biomedical literature, which is crucial for the automation of medical knowledge applications. Existing methods face limitations when handling non-local and multi-layered semantic dependencies, making it difficult to effectively integrate global semantics with local interactions. The goal of this study is to propose a novel model that addresses this issue and strengthens the modeling of complex dependencies. This paper proposes a new model that combines global dependency graphs with multi-level semantic information graphs (DMK). By utilizing a dual-graph collaborative mechanism, it integrates document-level contextual information to accurately model complex dependencies between entities. We introduce the KanChebConv convolutional layer based on the Kolmogorov–Arnold Network (KAN), replacing traditional linear weight matrices with learnable spline functions, thereby enhancing the model's ability to capture non-linear dependencies. We evaluated our model on the chemical–disease relation (CDR) dataset and the gene–disease relation (GDA) dataset. The results demonstrate that our model achieved the highest F1 score among the selected baselines on both datasets, validating its robustness and competitiveness. Through the collaborative mechanism of global and local information and the innovative KAN convolutional layer, our model effectively improves the accuracy and robustness of document-level biomedical relation extraction, showing strong potential for practical applications.
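The abstract's KanChebConv details are not given, but the underlying KAN idea — replacing a fixed scalar weight with a learnable function expanded in a polynomial basis — can be illustrated. The sketch below evaluates a scalar "edge function" as a Chebyshev expansion; the coefficients stand in for learnable parameters, and `cheb_basis`/`edge_fn` are hypothetical names for illustration only.

```python
def cheb_basis(x: float, degree: int):
    """Chebyshev polynomials T_0..T_degree at x (x in [-1, 1]), via the recurrence
    T_k(x) = 2x*T_{k-1}(x) - T_{k-2}(x)."""
    t = [1.0, x]
    for _ in range(2, degree + 1):
        t.append(2.0 * x * t[-1] - t[-2])
    return t[: degree + 1]

def edge_fn(x: float, coeffs):
    """A 'learnable' scalar edge function: sum_k c_k * T_k(x)."""
    return sum(c * t for c, t in zip(coeffs, cheb_basis(x, len(coeffs) - 1)))

# with coefficients [0, 1, 0] the edge function reduces to the identity f(x) = T_1(x) = x
print(edge_fn(0.5, [0.0, 1.0, 0.0]))   # prints 0.5
```

Training such a layer would adjust the coefficients per edge, letting each connection learn its own non-linear response instead of a single multiplicative weight.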

Citations: 0
Towards Workload-Tailored Optimization of Job Scheduling Policies in HPC Environments
IF 1.5 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-04 | DOI: 10.1002/cpe.70532
João Pedro M. N. dos Santos, José Eduardo Henriques da Silva, Antônio Tadeu A. Gomes

Supercomputers play a pivotal role in advancing research and development across diverse scientific and engineering domains. However, configuring job scheduling in these systems to ensure maximum productivity and cost-effectiveness is a challenge. Workload simulation emerges as a crucial tool in this context, offering a mechanism to explore job scheduling configurations in the presence of expected user behaviors. In this paper, we focus on simulation-based optimization applied to tuning job scheduling configurations. We introduce a discrete-event simulator that utilizes two strategies to accommodate real workload traces under varying job scheduling policies: job shaping and job splitting. Our findings from evaluating the proposed strategies on a real-world case study suggest that they allow the effective accommodation of the real workload traces used as input to the simulation of incompatible policies. By plugging the simulator into an evolutionary optimization algorithm, we also demonstrate the flexibility of the proposed strategies in helping with the proper exploration of the job scheduling configuration space.
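A discrete-event scheduling simulation of the kind the abstract describes can be sketched compactly. The snippet below is a hypothetical FCFS toy, not the paper's simulator: it advances a virtual clock as jobs drain, and illustrates one reading of "job splitting" by running a request larger than the machine as sequential chunks.

```python
import heapq

def simulate_fcfs(jobs, nodes):
    """Minimal discrete-event FCFS simulation.
    jobs: list of (submit_time, nodes_needed, runtime); returns per-job finish times."""
    free = nodes
    running = []   # min-heap of (finish_time, nodes_to_release)
    clock = 0
    finish = []
    for submit, need, runtime in sorted(jobs):
        clock = max(clock, submit)
        # job splitting: a request larger than the machine runs as sequential chunks
        chunks = -(-need // nodes) if need > nodes else 1
        need = min(need, nodes)
        for _ in range(chunks):
            while free < need:            # wait (advance the clock) for nodes to drain
                t, release = heapq.heappop(running)
                clock = max(clock, t)
                free += release
            free -= need
            heapq.heappush(running, (clock + runtime, need))
        finish.append(clock + runtime)
    return finish

# two 1-node jobs and one oversized 4-node job on a 2-node machine
print(simulate_fcfs([(0, 1, 10), (0, 1, 10), (0, 4, 5)], nodes=2))   # prints [10, 10, 20]
```

The oversized job waits for the two 1-node jobs to finish at t=10, then runs its two 2-node chunks back to back, finishing at t=20.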

Citations: 0
Deep Learning-Based Road Optimization Using UAVs for Disaster Areas
IF 1.5 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-02 | DOI: 10.1002/cpe.70539
Mehmet Serhat Ceylan, Gül Fatma Türker

Improving the effectiveness of disaster management, disaster preparedness, disaster risk reduction, and post-disaster recovery processes is of strategic importance today. Technological innovations and tools play an important role in implementing these processes effectively. The rapid development of artificial intelligence and UAV technologies has enabled their effective use in disaster situations, and the integration of artificial intelligence has increased the capability of these tools. In this study, open-road detection and optimization of the shortest route to a target were carried out in the context of disaster management. Computer vision techniques were used to detect road conditions from UAV imagery, and the shortest route to the target point was then determined. A dedicated dataset representing disaster scenarios was created for model training, and image segmentation was performed using current YOLO models. In the route optimization phase, the Dijkstra, A*, BFS, and DFS algorithms were applied. Comparing the models developed for route finding showed that YOLOv9e-seg provided the fastest and most accurate results, with an average processing time of 624 ms and a mAP of 84.4%. Among the shortest-path algorithms, Dijkstra and A* reached the target point fastest, with average times of 385 and 387 ms, respectively. These results demonstrate that the developed model can accurately determine the shortest path using a UAV in earthquake-affected areas.
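The pipeline above ends with a shortest-path search over the segmented road map. A minimal sketch of that step — Dijkstra over a grid whose cells the segmentation has marked open or blocked — might look as follows (the grid and coordinates are hypothetical, not from the study):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path length over a grid: 1 = open road cell, 0 = blocked."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue   # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None   # no open route to the goal

# 1 = road segmented as open, 0 = blocked (e.g., by debris)
road_map = [
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
]
print(dijkstra_grid(road_map, (0, 0), (2, 2)))   # prints 4
```

With unit step costs, Dijkstra behaves like BFS here; on a weighted map (e.g., penalizing partially damaged road cells), the same code would still return the minimum-cost route, which is why Dijkstra and A* are the natural choices among the four algorithms compared.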

Citations: 0
PLRAC: A PUF Characteristic Based Lightweight Remote Attestation for Container
IF 1.5 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-02 DOI: 10.1002/cpe.70534
XinFeng He, Tanxin Zou

Container-based cloud technology is widely used in the digital transformation of enterprises, and the combination of cloud computing and container technology has achieved efficient resource management. However, container technology also has security weaknesses and vulnerabilities that expose it to cyberattacks. Most existing security studies focus on specific vulnerabilities and subsystems and fail to provide reliable verification of the overall security of container environments. To solve this problem, the concept of a container-oriented PUF (CPUF) is proposed, drawing on the characteristics of physical unclonable functions (PUFs). By integrating the PCR value from the TPM with container attributes and encapsulating them through a TEE, a unique and hardware-secured container identity is generated, enabling container integrity verification. Building on this, a lightweight remote attestation scheme for containers (PLRAC) is proposed, with the TEE and CPUF serving as the root of trust. By integrating PUF and TEE technology, the scheme achieves a low-cost, high-efficiency remote verification mechanism for containers, effectively detecting whether containers have been tampered with or compromised. We formally verified the security of this scheme using the AVISPA tool and, combined with theoretical analysis, demonstrated its resistance to typical attacks such as replay and forgery. Performance evaluations indicate that, compared to other authentication schemes, PLRAC reduces communication overhead by up to approximately 21.5% while providing additional security properties such as anonymity and uniqueness.
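The identity-generation step, binding a TPM PCR value to container attributes, can be sketched in miniature. This is an illustrative stand-in only: the SHA-256 digest, the field names, and the inputs below are assumptions, and the real CPUF construction additionally involves PUF characteristics and TEE encapsulation that a few lines cannot capture:

```python
import hashlib
import json

def container_identity(pcr_value: bytes, attributes: dict) -> str:
    """Toy derivation of a container identity from a TPM PCR value
    and container attributes (illustrative only, not the paper's scheme)."""
    # Canonicalise the attributes so the digest is key-order independent.
    canon = json.dumps(attributes, sort_keys=True).encode()
    return hashlib.sha256(pcr_value + canon).hexdigest()

pcr = bytes.fromhex("ab" * 32)                     # stand-in for a PCR value
attrs = {"image": "nginx:1.25", "id": "c1"}        # hypothetical attributes
ident = container_identity(pcr, attrs)

# Any tampering with an attested attribute changes the derived identity.
tampered = container_identity(pcr, {"image": "nginx:evil", "id": "c1"})
assert ident != tampered
```

The point of the toy is the binding property: changing any attested input changes the derived identity, which is what makes tampering detectable during attestation.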

Citations: 0
Enhanced Pothole Detection in Complex Environments Using ARCH-RTDETR: A Lightweight and Efficient Approach
IF 1.5 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-02 DOI: 10.1002/cpe.70541
Zhihai Liu, Ruijie Liu, Wenhao Sun, Jinfeng Ma

Detecting potholes in complex environments poses challenges such as varying illumination, shadows, and occlusions. Traditional methods often suffer from insufficient detection accuracy and poor real-time performance. To enhance detection robustness without sacrificing inference speed, this paper adopts the RT-DETR (Real-Time Detection Transformer) framework—which requires no NMS (Non-Maximum Suppression) post-processing and features an efficient hybrid encoder—as its foundation. We propose the lightweight and efficient ARCH-RTDETR detection model. The model introduces targeted enhancements to the backbone, feature-fusion module, and multi-scale architecture. Specifically, an AFGCA (Adaptive Fusion Global Context Attention) mechanism strengthens sensitivity to subtle cues; RepBN (Reparameterized Batch Normalization) is deeply integrated into the AIFI (Adaptive Instance Feature Integration) module to optimize feature distributions and increase multi-scale representational capacity; and the proposed CA-HSFPN (Coordinate Attention-guided Hierarchical Scale Feature Pyramid Network) improves the effectiveness of cross-scale feature fusion. Experiments on diverse datasets show that ARCH-RTDETR achieves an average detection accuracy of 85%, outperforming the RT-DETR baseline by 2.9%, while also improving detection precision and inference efficiency. These results indicate strong potential for deployment in intelligent transportation systems. This research provides a technical reference for small object detection, addressing the low efficiency of traditional manual inspections and the high detection latency of existing equipment in intelligent transportation systems, thereby offering a reliable technical solution for road safety assurance.
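The coordinate-attention idea behind CA-HSFPN (attention factorized along the height and width axes) can be sketched schematically. This is not the paper's module: the weight matrices below are random stand-ins for learned 1x1 convolutions, and the channel-reduction and shared-transform stages of the published coordinate-attention design are omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Schematic coordinate attention on a (C, H, W) feature map.

    Pools along each spatial axis separately, maps the pooled vectors
    through per-axis channel mixing matrices (stand-ins for learned
    1x1 convolutions), and reweights the input with the two gates.
    """
    pooled_h = x.mean(axis=2)            # (C, H): average over width
    pooled_w = x.mean(axis=1)            # (C, W): average over height
    gate_h = sigmoid(w_h @ pooled_h)     # (C, H) attention along height
    gate_w = sigmoid(w_w @ pooled_w)     # (C, W) attention along width
    return x * gate_h[:, :, None] * gate_w[:, None, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))               # toy feature map
y = coordinate_attention(x, rng.standard_normal((8, 8)),
                         rng.standard_normal((8, 8)))
print(y.shape)  # (8, 16, 16)
```

Because each gate keeps one spatial coordinate, the module can localize elongated structures (such as road cracks around a pothole) better than a single global-pooling attention vector.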

Citations: 0
A Python/Fortran Implementation of the Lattice-Boltzmann Kernel on Multiple GPU Using the OpenACC Framework
IF 1.5 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-01-02 DOI: 10.1002/cpe.70518
Carlos Junqueira-Junior, Erwan Zamora Medina, Noureddine Taibi, Simon Marié

The increasing availability of GPU-accelerated architectures for high-performance computing presents new opportunities for scientific software, but also challenges due to the complexity of porting legacy codes to accelerator platforms. Directive-based programming models such as OpenACC offer a minimally intrusive pathway to exploit GPU acceleration without extensive rewriting of existing codes. The current work presents a comprehensive performance and portability study of a Lattice-Boltzmann Method solver (PyLB), originally written in Python, Mpi4Py, and Fortran for CPU architectures, which is ported to GPUs by applying OpenACC directives to the Fortran routines. The performance of the solver is evaluated on the NVIDIA V100, A100, and H100 GPUs available on the Jean Zay supercomputer at the Institute for Development and Resources in Intensive Scientific Computing (IDRIS) in France. Roofline analysis and extensive strong and weak scalability tests show that the GPU-enabled version of PyLB scales efficiently across multiple GPUs. On the H100 GPU, the solver achieves performance equivalent to thousands of CPU cores and shows strong energy- and carbon-efficiency advantages over traditional CPU-based simulations. The implementation is validated against classical benchmarks, including the decaying Taylor-Green vortex and the flow over a 3-D sphere. The results confirm the physical accuracy of the GPU port while highlighting its computational and environmental advantages.
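PyLB's kernels are not reproduced in the abstract; as a generic illustration of the pattern being accelerated (not the paper's code), the streaming step of a D2Q9 lattice with periodic boundaries reduces to per-direction array shifts, which is exactly the kind of regular data-parallel loop that OpenACC directives map well to GPUs:

```python
import numpy as np

# D2Q9 lattice velocities: one rest direction, four axis-aligned,
# four diagonal.
VELOCITIES = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                       [1, 1], [-1, 1], [-1, -1], [1, -1]])

def stream(f):
    """Periodic streaming step: shift each population f[k] along its
    lattice velocity. f has shape (9, NX, NY)."""
    out = np.empty_like(f)
    for k, (cx, cy) in enumerate(VELOCITIES):
        out[k] = np.roll(np.roll(f[k], cx, axis=0), cy, axis=1)
    return out

rng = np.random.default_rng(1)
f = rng.random((9, 32, 32))      # toy population distributions
f_new = stream(f)

# Streaming only moves mass between sites; total density is conserved.
assert np.isclose(f.sum(), f_new.sum())
```

In the Fortran version such a loop nest would typically be wrapped in an `!$acc parallel loop` region, which is what makes the directive-based port minimally intrusive.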

Citations: 0
Journal
Concurrency and Computation-Practice & Experience