
Latest Publications in Array

Blockchain-IoMT-enabled federated learning: An intelligent privacy-preserving control policy for electronic health records
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100586
Munusamy S, Jothi K R
The integration of the Internet of Medical Things (IoMT), blockchain technology, and federated learning provides a new approach to keeping Electronic Health Records (EHRs) in a decentralized, secure, and privacy-preserving form. This article introduces a novel Blockchain-IoMT-based Federated Learning (FL) system that uses an intelligent privacy-preserving control method to address the key problems in EHR administration, including data security, patient privacy, and interoperability. The FL paradigm keeps patient data on edge nodes, reducing the opportunities for centralized attacks. Advanced privacy-preserving methods such as differential privacy and homomorphic encryption ensure that sensitive data is not exposed to adversarial models during training and communication, while blockchain technology enables immutable recording, transparent auditing, and decentralized data access. Experimental evaluation on Parkinson's disease data indicates that the proposed PPFL-ICP (Privacy-Preserving Federated Learning with Intelligent Control Policy) model outperforms current practices in accuracy, robustness, and computational efficiency. The results confirm the usefulness of the framework in protecting healthcare data, enabling secure communication among distributed nodes, and setting the stage for scalable, privacy-aware healthcare systems.
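As a concrete illustration of the federated setup the abstract describes, the sketch below runs a toy FedAvg loop in which each simulated edge node trains locally and the server only ever sees clipped, Gaussian-noised weight updates. The data, model, and parameters (clip_norm, noise_std, learning rate) are assumptions made for this example; it is not the authors' PPFL-ICP implementation.

```python
# Minimal federated-averaging sketch with a DP-style noisy update step.
# All data, model choices, and hyperparameters are illustrative assumptions.
import numpy as np

RNG = np.random.default_rng(0)

def local_update(global_weights, local_data, lr=0.1):
    """Toy local training: one least-squares gradient step on the client's own data."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def clip_and_noise(update, global_weights, clip_norm=1.0, noise_std=0.02):
    """Clip the client's weight delta and add Gaussian noise before it leaves the node."""
    delta = update - global_weights
    norm = np.linalg.norm(delta)
    if norm > clip_norm:
        delta *= clip_norm / norm
    return global_weights + delta + RNG.normal(0.0, noise_std, size=delta.shape)

def federated_round(global_weights, clients):
    """One FedAvg round: raw data never leaves the edge nodes, only noisy updates do."""
    noisy = [clip_and_noise(local_update(global_weights, data), global_weights)
             for data in clients]
    return np.mean(noisy, axis=0)

if __name__ == "__main__":
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(5):                        # five simulated IoMT edge nodes
        X = RNG.normal(size=(50, 2))
        y = X @ true_w + RNG.normal(scale=0.1, size=50)
        clients.append((X, y))
    w = np.zeros(2)
    for _ in range(100):
        w = federated_round(w, clients)
    print("global model weights:", np.round(w, 2))   # approaches [2.0, -1.0] despite the noise
```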
Citations: 0
Collaborative path optimization model of power material supply chain based on hash index spatio-temporal graph neural network
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100598
Lichong Cui, Huayu Chu, Junsheng Wang, Wei Guo, Fan Yang, Zixi Hu
The daily dispatching of materials in power systems involves multifaceted operations, including data analysis and logistics warehouse management. Current research on intelligent IoT mainly focuses on the static management of electrical materials and isolated dynamic dispatching schemes; it lacks a comprehensive spatio-temporal circulation design throughout the IoT-enabled distribution process, a gap that hinders the implementation of efficient allocation mechanisms. This paper considers the coupling relationship between logistics collaborative data and spatio-temporal correlations. Using the Hash Index algorithm, the logistics data are transformed into multi-objective optimization composite functions. The proposed framework integrates Spatio-Temporal Graph Neural Networks (STGNNs) to model spatio-temporal relationships among nodes adjacent to abnormal coordinates in distribution paths. By aggregating information from neighboring collaborative nodes to update node embeddings, the framework leverages the enhanced external functions of multiple adjacent nodes in decision-making processes. This approach effectively resolves optimal path selection challenges under emergency conditions while ensuring global model optimization. Experimental results show that, compared with mainstream graph neural network models, the proposed model reduces path prediction errors by an average of approximately 12.3%, as measured by Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). Moreover, it shortens the path length by 17.6% in multi-objective collaborative route optimization. These results confirm the model's effectiveness and superiority in routing tasks within the electric power material supply chain. The proposed solution also exhibits notable technical advantages over mainstream approaches; it not only ensures operational efficiency in power logistics but also offers technical support for multi-vehicle and multi-station collaborative operations under emergency conditions in the logistics industry.
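To make the neighbour-aggregation step tangible, the following minimal sketch performs one graph-convolution-style update in which every node embedding is refreshed from the mean of its (self plus neighbour) embeddings. The toy adjacency matrix, feature size, and weight matrix are assumptions for illustration only, not the paper's hash-index STGNN.

```python
# One message-passing step: mean-aggregate neighbour embeddings, then a linear map + ReLU.
# The supply-chain graph and dimensions below are made up for the example.
import numpy as np

def neighbour_aggregation(adj, node_embeddings, weight):
    """Refresh each node embedding from the mean of its own and its neighbours' embeddings."""
    adj_hat = adj + np.eye(adj.shape[0])              # add self-loops
    deg_inv = 1.0 / adj_hat.sum(axis=1, keepdims=True)
    aggregated = deg_inv * (adj_hat @ node_embeddings)  # row-normalised mean of messages
    return np.maximum(aggregated @ weight, 0.0)          # ReLU

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 4 supply-chain nodes; node 2 plays the role of an "abnormal" coordinate whose neighbours matter.
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    embeddings = rng.normal(size=(4, 8))    # 8-dimensional node features
    weight = rng.normal(size=(8, 8)) * 0.1
    updated = neighbour_aggregation(adj, embeddings, weight)
    print(updated.shape)  # (4, 8): one refreshed embedding per node
```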
Citations: 0
Innovative Data Modeling Concepts for Big Data Analytics: Probabilistic Cardinality and Replicability Notations
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100608
Jelena Hađina, Joshua Fogarty, Boris Jukić
The evolving practice of big data analytics encompasses the aggregation of data from multiple sources, with the imperative of delivering metrics and reports that maintain a high standard of reliability and consistency. As stakeholders may interpret the data and associated metrics differently throughout the process, they often have to make assumptions, which can lead to inconsistencies in metric aggregation. Our work addresses the limitation of traditional data modeling methods, which often fail to capture the nuances of the relationships among various data sources. We propose two data modeling concepts, probabilistic cardinality and metric replicability, along with definitions, notation, and illustrative examples, as well as a general big data analytics framework used to discuss the role and implementation of these concepts. The application of the proposed concepts is illustrated through two applied case studies highlighting the variety of ways in which they reduce the risk of inconsistent aggregation and reporting of metrics.
Citations: 0
Utilizing JIT Python runtime and parameter optimization for CPU-based Gaussian Splatting thumbnailer
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100611
Evgeni Genchev, Dimitar Rangelov, Kars Waanders, Sierd Waanders
Gaussian Splatting has emerged as a powerful technique for high-fidelity 3D scene representation, yet its computational demands hinder rapid visualization, particularly on CPU-based systems. This paper introduces a lightweight method for efficient thumbnail generation from Gaussian Splatting data, leveraging Just-in-Time (JIT) compilation in Python to optimize performance-critical operations. By integrating the Numba JIT compiler and strategically simplifying parameters (omitting rotation data and approximating Gaussians as spheres), we achieve significant speed improvements while maintaining visual fidelity. Systematic experimentation with Gaussian splat sizes (σ) and image resolutions reveals optimal trade-offs: σ values of 0.4–0.5 balance detail and speed, allowing 720p thumbnail generation in 1.8 s. JIT compilation reduces execution time by 156× compared to pure Python (from 336 s to 2.33 s), transforming Python into a viable tool for performance-sensitive tasks. The CPU-focused design ensures portability across devices, addressing resource-constrained scenarios such as criminal investigations or field operations. Although Python's inherent performance ceiling remains a limitation, this work demonstrates the potential of JIT-driven optimizations for lightweight 3D rendering, offering a pragmatic solution for rapid previews without GPU dependency. Future directions include migration to compiled languages and adaptive parameter tuning to further enhance scalability and real-time applicability.
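The snippet below is a minimal sketch of the kind of hot loop such a thumbnailer relies on: an isotropic ("spherical", rotation-free) CPU splatting kernel whose nested loops Numba's @njit compiles to machine code. The point layout, colour handling, image size, and sigma value are assumptions for the example, not the authors' actual renderer.

```python
# Toy CPU splatting kernel accelerated with Numba's @njit; Gaussians are isotropic
# (no rotation), mirroring the simplification the abstract describes. All sizes are illustrative.
import math
import numpy as np
from numba import njit

@njit(cache=True)
def splat(points, colors, sigma, height, width):
    """Accumulate each 2D-projected Gaussian into an RGB image buffer."""
    image = np.zeros((height, width, 3), dtype=np.float32)
    radius = int(3 * sigma) + 1                      # only touch pixels within ~3 sigma
    for i in range(points.shape[0]):
        cx, cy = points[i, 0], points[i, 1]
        x0, x1 = max(0, int(cx) - radius), min(width, int(cx) + radius + 1)
        y0, y1 = max(0, int(cy) - radius), min(height, int(cy) + radius + 1)
        for y in range(y0, y1):
            for x in range(x0, x1):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                w = math.exp(-d2 / (2.0 * sigma * sigma))
                for c in range(3):
                    image[y, x, c] += w * colors[i, c]
    return image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform([0, 0], [640, 360], size=(5000, 2)).astype(np.float32)
    cols = rng.uniform(0.0, 1.0, size=(5000, 3)).astype(np.float32)
    thumb = splat(pts, cols, 2.0, 360, 640)          # first call triggers JIT compilation
    print(thumb.shape, float(thumb.max()))
```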
Citations: 0
BGPCN: A BERT and GPT-2-based Relational Graph Convolutional Network for hostile Hindi information detection
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100601
Angana Chakraborty, Subhankar Joardar, Dilip K. Prasad, Arif Ahmed Sekh
The proliferation of hostile content on social media platforms, particularly in low-resource languages such as Hindi, poses significant challenges to maintaining a safe online environment. This study introduces the BGPCN model, which leverages the strengths of Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer 2 (GPT-2) embeddings, integrated with a Relational Graph Convolutional Network (R-GCN), to identify hostile information in Hindi. The model addresses both coarse-grained (Hostile or Non-Hostile) and fine-grained (Fake, Defamation, Hate, Offensive) classification tasks. The proposed model is evaluated on the Constraint 2021 Hindi dataset and outperforms the latest methodologies, achieving F1-scores of 0.9816, 0.85, 0.50, 0.62, and 0.65 across the coarse-grained and fine-grained classifications. Comprehensive error analysis and ablation studies underscore the robustness of the BGPCN model while identifying opportunities for refinement. The findings demonstrate that BGPCN offers a reliable and scalable solution for hostile content detection, with potential applications in social media monitoring and content moderation. The data and code will be publicly accessible at https://github.com/mani-design/BGPCN.
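As a rough sketch of the embedding-fusion idea, the code below mean-pools BERT and GPT-2 hidden states for a single sentence and concatenates them into one feature vector of the kind a downstream R-GCN node could consume. The checkpoints (bert-base-multilingual-cased, gpt2) and the pooling strategy are assumptions for illustration; the BGPCN paper's exact configuration may differ.

```python
# Fuse BERT and GPT-2 sentence embeddings by mean-pooling and concatenation.
# Model checkpoints and pooling are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

def sentence_embedding(text, model_name):
    """Mean-pool the last hidden states of a pretrained encoder for one sentence."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)             # (hidden_dim,)

if __name__ == "__main__":
    post = "यह एक उदाहरण वाक्य है"                     # placeholder Hindi sentence
    bert_vec = sentence_embedding(post, "bert-base-multilingual-cased")
    gpt2_vec = sentence_embedding(post, "gpt2")
    fused = torch.cat([bert_vec, gpt2_vec])           # node feature for a graph classifier
    print(fused.shape)  # 768 (BERT) + 768 (GPT-2) -> torch.Size([1536])
```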
Citations: 0
Semantic segmentation of terrestrial whole-sky images using the new W-Net model with the stationary wavelet transform 2D
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100587
D.G. Fantini, R.N. Silva, M.B.B. Siqueira
This work proposes a novel deep learning model, named W-Net, focused on the semantic segmentation of whole-sky images obtained by fisheye cameras. The model is based on two U-Net networks connected in series, interlinked by skip connections and attention skip connections. Additionally, the proposed approach incorporates a color space transformation layer that converts images from the RGB space to either HSV or CIE XYZ, followed by a feature extraction layer utilizing the 2D stationary wavelet transform. Novel attention mechanisms are introduced, notably the one responsible for the transition of information between the two U-Nets. To evaluate the model’s performance, a comparative analysis was conducted against four well-established models in the literature. It is noteworthy that, while three of these models are designed for binary semantic segmentation, considering only the “Sky” and “Cloud” classes, the W-Net model employs multiclass semantic segmentation, differentiating among the “Sky”, “Sun”, “Cloud”, and “Edge” categories. Experimental results demonstrate the superiority of the W-Net architecture. The unweighted version achieved a Mean Intersection over Union (MeanIoU) of 87.63%, a Dice coefficient of 96.30%, an overall Accuracy of 97.40%, and a Precision of 93.07%. The weighted W-Net further improved these results, achieving a MeanIoU of 87.79%, a Dice coefficient of 96.62%, an Accuracy of 97.41%, and a Precision of 89.89%. These outcomes confirm that the proposed model outperforms the benchmark methods and that the inclusion of weighting enhances the detection of sun regions. Finally, a qualitative evaluation was performed through a visual comparison between the manually annotated masks and those generated by the proposed model.
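A minimal sketch of the preprocessing idea, under assumed settings (a Haar wavelet, level-1 decomposition, HSV conversion of a random stand-in image), is shown below: the colour-converted image is augmented with stationary-wavelet sub-bands that keep the original resolution, so they can be stacked as extra input channels for a segmentation network. This is illustrative only, not the W-Net authors' implementation.

```python
# RGB -> HSV conversion followed by a level-1 stationary 2D wavelet transform (SWT2);
# the sub-bands keep full resolution and are stacked as extra per-pixel features.
# Wavelet, level, and image size are illustrative assumptions.
import numpy as np
import pywt
from matplotlib.colors import rgb_to_hsv

def wavelet_features(rgb_image, wavelet="haar", level=1):
    """Return HSV channels stacked with SWT2 sub-bands of the V (brightness) channel."""
    hsv = rgb_to_hsv(rgb_image)                        # (H, W, 3), values in [0, 1]
    value = hsv[..., 2]
    coeffs = pywt.swt2(value, wavelet=wavelet, level=level)
    approx, (horiz, vert, diag) = coeffs[0]
    return np.dstack([hsv, approx, horiz, vert, diag])  # (H, W, 7) feature stack

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_sky = rng.uniform(0.0, 1.0, size=(256, 256, 3))  # stands in for a fisheye frame
    feats = wavelet_features(fake_sky)
    print(feats.shape)  # (256, 256, 7)
```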
Citations: 0
The effect of pruning on the efficiency and effectiveness of hybrid imbalanced multiclass classification models
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100610
Esra’a Alshdaifat, Ala’a Al-Shdaifat, Fairouz Hussein
Hybrid models are recognized as one of the most effective approaches to addressing the imbalanced data problem. In these models, data-level methods such as over-sampling are combined with algorithm-level methods, such as ensemble approaches. However, the resulting models can face challenges concerning inefficiency and ineffectiveness. This paper proposes a solution to these issues: a novel weighted F1-ordered pruning technique integrated with two state-of-the-art hybrid models, Balanced Bagging and Balanced One-versus-One. Unlike prior hybrid models designed primarily to address the binary imbalance problem, the proposed approach is specifically designed to tackle the challenging multi-class classification imbalance problem. An extensive experimental evaluation and statistical validation demonstrate that the Pruned Balanced Bagging ensemble markedly outperforms the considered hybrid models.
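The sketch below illustrates the general idea of F1-ordered pruning on a balanced-bagging-style ensemble: decision trees are trained on class-balanced bootstrap samples, ranked by weighted F1 on a validation split, and only the top-ranked members are kept for voting. The resampling scheme, ensemble size, and keep-fraction are assumptions for the example, not the paper's exact procedure.

```python
# Balanced-bagging-style ensemble with weighted-F1-ordered pruning; all settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def balanced_bootstrap(X, y, rng):
    """Draw an equal number of samples (with replacement) from every class."""
    classes, counts = np.unique(y, return_counts=True)
    n_per_class = counts.min()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_per_class, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

def majority_vote(estimators, X):
    """Hard-voting prediction over a list of fitted estimators."""
    votes = np.stack([est.predict(X) for est in estimators])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

if __name__ == "__main__":
    X, y = make_classification(n_samples=3000, n_classes=3, n_informative=6,
                               weights=[0.8, 0.15, 0.05], random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    rng = np.random.default_rng(0)

    # 1) Train a balanced-bagging-style ensemble of decision trees.
    ensemble = []
    for _ in range(30):
        Xb, yb = balanced_bootstrap(X_tr, y_tr, rng)
        ensemble.append(DecisionTreeClassifier(random_state=0).fit(Xb, yb))

    # 2) F1-ordered pruning: rank members by weighted F1 on validation data, keep the top half.
    scores = [f1_score(y_val, est.predict(X_val), average="weighted") for est in ensemble]
    keep = np.argsort(scores)[::-1][:15]
    pruned = [ensemble[i] for i in keep]

    full_f1 = f1_score(y_val, majority_vote(ensemble, X_val), average="weighted")
    pruned_f1 = f1_score(y_val, majority_vote(pruned, X_val), average="weighted")
    print(f"full ensemble F1={full_f1:.3f}  pruned ensemble F1={pruned_f1:.3f}")
```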
Citations: 0
Reinforcement learning based intelligent optimisation for bin packing problems: A review
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100616
Nadia Dahmani, Amril Nazir, Ikbal Taleb, Syed M. Salman Bukhari
The convergence of Reinforcement Learning (RL) and Bin Packing Problems (BPP) is a critical field of study with profound ramifications for the logistics, manufacturing, computing, and retail industries. This paper examines the progression from simple rule-based tactics to advanced Deep Reinforcement Learning (DRL) techniques for solving BPPs. Through a thorough review of 231 papers published between 2019 and 2024, we address key research questions such as “To what extent has academic research explored the use of RL for BPP during this time frame?” and “Which specific areas of application and methodologies have been predominantly used?” Our examination highlights a significant and rapid growth in research activity in this field. The study reveals a clear inclination towards DRL over traditional RL techniques, especially in complex, multi-dimensional BPP situations. It also identifies a growing interest in hybrid models and transfer learning methods as potential solutions to the challenges of scalability, computational requirements, and the exploration-exploitation trade-off. This study shows that some DRL models are highly effective in complex BPP scenarios. It suggests that future research should focus on scalability, operational efficiency, and the practical implementation of theoretical achievements in industry. This study aims to promote multidisciplinary discourse and collaboration in optimisation and artificial intelligence by comprehensively analysing current achievements and identifying the remaining problems.
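For context on the problem the reviewed methods target, the snippet below implements the classical first-fit-decreasing heuristic for one-dimensional bin packing, the kind of simple rule-based baseline that RL and DRL approaches are typically compared against. The item sizes and capacity are made up for the example.

```python
# Classical first-fit-decreasing (FFD) heuristic for 1D bin packing; data is illustrative.
def first_fit_decreasing(items, capacity):
    """Place each item (largest first) into the first open bin with enough remaining room."""
    bins = []      # remaining capacity of each open bin
    packing = []   # parallel list of item lists, one per bin
    for item in sorted(items, reverse=True):
        for i, remaining in enumerate(bins):
            if item <= remaining:
                bins[i] -= item
                packing[i].append(item)
                break
        else:                              # no open bin fits: open a new one
            bins.append(capacity - item)
            packing.append([item])
    return packing

if __name__ == "__main__":
    sizes = [7, 5, 5, 4, 3, 3, 2, 2, 1]
    print(first_fit_decreasing(sizes, capacity=10))
    # [[7, 3], [5, 5], [4, 3, 2, 1], [2]] -> 4 bins
```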
Citations: 0
High-performance reversible data hiding scheme via block dynamic selection
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100618
Zhengwei Zhang, Weien Xiao, Fenfen Li
Pixel value ordering (PVO) is a widely used reversible data hiding (RDH) technique that leverages pixel correlations within image blocks to generate high-fidelity stego-images. However, its embedding performance is limited by fixed block sizes, which fail to adapt to varying texture complexities. To address this issue, we propose a novel RDH method based on block dynamic selection. First, we employ a 2 × 3 image block as the basic embedding unit. In addition, we introduce a dual-layer embedding mechanism that partitions the cover image into checkerboard-like gray and white blocks, which enables the use of neighboring pixels to estimate the complexity of each block more accurately. For flat blocks with lower complexity values, we further subdivide the 2 × 3 block into two 1 × 3 sub-blocks and propose a pixel-based pre-ordering scheme to determine the optimal ordering of pixels within the block, thereby increasing the number of expandable errors. For texture blocks, we utilize the adaptive pixel distribution density (APDD) to select the most suitable neighboring block for merging. By leveraging location information from two predicted pixels in the current block, APDD dynamically selects the optimal block, effectively enhancing its embedding potential. Experimental results demonstrate that the proposed method achieves a PSNR improvement of up to 1.46 dB over state-of-the-art methods under the same embedding capacity.
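To ground the terminology, the sketch below shows the classical PVO mechanism on a single 1 × 3 sub-block: the largest pixel is predicted from the second largest, a prediction error of 1 is expanded to carry one payload bit, and larger errors are shifted so the process stays reversible. This illustrates the building block the abstract refers to, not the paper's dynamic block-selection scheme itself.

```python
# Classical PVO embedding/extraction on the maximum pixel of a tiny block.
# The example block values are made up; the mechanism is the standard PVO idea.
import numpy as np

def pvo_embed_max(block, bit):
    """Embed at most one bit into the maximum pixel of the block. Returns (block', bit_used)."""
    block = block.copy()
    order = np.argsort(block, kind="stable")
    top, second = order[-1], order[-2]
    error = int(block[top]) - int(block[second])
    if error == 1:                     # expandable error: carries the payload bit
        block[top] += bit
        return block, True
    if error >= 2:                     # shifted to stay distinguishable from embedded errors
        block[top] += 1
    return block, False                # error == 0 or error >= 2: no bit embedded here

def pvo_extract_max(block):
    """Recover (original block, extracted bit or None) from a marked block."""
    block = block.copy()
    order = np.argsort(block, kind="stable")
    top, second = order[-1], order[-2]
    error = int(block[top]) - int(block[second])
    if error == 1:
        return block, 0                # embedded bit was 0, pixel unchanged
    if error == 2:
        block[top] -= 1
        return block, 1                # embedded bit was 1
    if error >= 3:
        block[top] -= 1                # undo the shift
    return block, None                 # no bit was carried by this block

if __name__ == "__main__":
    cover = np.array([120, 121, 119], dtype=np.int32)   # a flat 1 x 3 sub-block
    marked, used = pvo_embed_max(cover, bit=1)
    restored, bit = pvo_extract_max(marked)
    print(marked, used, restored, bit)   # [120 122 119] True [120 121 119] 1
```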
Citations: 0
Agile change approach for collaborative software development contexts: A systematic literature review
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-01 DOI: 10.1016/j.array.2025.100595
José Luis González-Blázquez, Alicia García-Holgado, Francisco José García-Peñalvo
This systematic literature review examines how agile solutions can drive organizational change in collaborative open-source software (OSS) contexts. Motivated by persistent challenges in governance, alignment, contribution lifecycles, workflow, leadership, and measurement, the review asks which prescriptive and non-prescriptive agile approaches are being applied when organizations collaborate with OSS communities, and how these approaches mitigate those issues. The study first conducts an umbrella review (2000–2024) to confirm the gap and scope, then performs a main systematic review across digital libraries using inclusion, exclusion, and quality criteria. The synthesis maps findings to a conceptual framework of nine problem areas and two change paths. Results show a dominance of prescriptive methods, especially Scrum, LeSS, SAFe, and Kanban, for workflow transparency, dependency management, and coordination, while governance and leadership models remain underexplored. Building on this evidence, the paper proposes: (1) a prescriptive change approach for low-maturity organizations that integrates holacratic governance with Scrum/LeSS, Communities of Practice, Design Thinking for innovation, Management 3.0 leadership, and KPI-oriented cultures; and (2) a non-prescriptive approach for mature organizations based on unFIX's fractal organizational design, forums and collaboration patterns, delegation levels, and outcome-focused metrics to extend co-evolution with communities. The dual pathway enables organizations to select and sequence interventions that align with their paradigm and maturity, thereby bridging organizational and community boundaries to foster sustained agility. The review highlights open research needs on governance mechanisms, leadership in symbiotic ecosystems, and empirical evaluations of combined scaling approaches beyond SAFe, as well as longitudinal studies on alignment, dependency management, and measurement cultures in high-variability OSS environments.
Citations: 0