
Array: Latest Articles

A high-dimensional many-objective co-optimization method on equipment utilization and maintenance
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-01-03 DOI: 10.1016/j.array.2025.100672
Yingli Yang , Weixing Song , Jingzhe Wang
In order to enhance the utilization and maintenance efficiency of equipment, grounded in the theory of preventive maintenance, the interrelationship between equipment usage and maintenance was analyzed. By proposing optimization objectives aimed at reducing the mean failure rate, preventive maintenance time, standard deviation of the monthly motor-hour reserve, and maintenance costs, and by incorporating both preventive and corrective maintenance strategies, an optimization model for equipment utilization and maintenance was established. A DNA-based population encoding method, along with crossover and mutation operations, was designed. Furthermore, an adaptive reinforcement learning algorithm was employed to adjust the reference vectors, and the NSGA-III algorithm was improved for the simulation experiments. The model and algorithm are not only practical but also computationally efficient, scientifically sound, and broadly applicable. Based on an analysis of the simulation results, optimization strategies for equipment utilization preferences were proposed. These strategies can support decision-making when developing equipment deployment and maintenance plans for the army.
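The DNA-based encoding with crossover and mutation mentioned above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the four-letter alphabet, single-point crossover, and per-base mutation rate are assumptions.

```python
import random

BASES = "ACGT"

def random_chromosome(n, rng):
    """A candidate solution encoded as a length-n DNA string."""
    return "".join(rng.choice(BASES) for _ in range(n))

def crossover(p1, p2, rng):
    """Single-point crossover: swap the tails of two parents at a random cut."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom, rate, rng):
    """Point mutation: replace each base with a different base with probability `rate`."""
    out = []
    for base in chrom:
        if rng.random() < rate:
            base = rng.choice([b for b in BASES if b != base])
        out.append(base)
    return "".join(out)

rng = random.Random(42)
a, b = random_chromosome(12, rng), random_chromosome(12, rng)
c1, c2 = crossover(a, b, rng)
m = mutate(c1, 0.1, rng)
```

In a full NSGA-III loop, offspring produced this way would be decoded back into usage/maintenance decision variables before non-dominated sorting.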
Citations: 0
Tiny Machine Learning (TinyML): Research trends and future application opportunities
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-01-03 DOI: 10.1016/j.array.2025.100674
Hui Han , Silvana Trimi , Sang M. Lee
Tiny Machine Learning (TinyML) enables artificial intelligence on low-power edge devices, yet a quantitative understanding of TinyML research remains limited. This study addresses this gap through a comprehensive bibliometric analysis of 392 peer-reviewed publications (2020–2024) from the Web of Science, using Biblioshiny and VOSviewer. This article contributes by mapping the first bibliometric structure of TinyML, identifying major trends (exponential publication growth, strong international collaboration, core research themes, key contributors, etc.) and proposing future directions (such as sustainable hardware, federated learning, ethical frameworks, etc.). The findings provide a scholarly foundation and strategic roadmap for advancing scalable, energy-efficient, and privacy-preserving TinyML applications.
Citations: 0
Benchmarking the adversarial resilience of machine learning models for DDoS detection
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-01-03 DOI: 10.1016/j.array.2025.100664
Harsh Dadhwal , Mateus de Abreu , Nazanin Parvizi , Sajal Saha
Distributed Denial of Service (DDoS) attacks continue to grow in scale and sophistication, making timely and reliable detection increasingly challenging. Machine learning (ML) models have demonstrated promise in identifying malicious traffic patterns. However, their vulnerability to adversarial manipulation remains a critical security concern. This study benchmarks the adversarial robustness of several standalone ML models trained on clean traffic from the CIC-IDS2017 dataset. Adversarial perturbations are generated using the Fast Gradient Sign Method (FGSM). Baseline robustness is assessed by evaluating each model on FGSM adversarial samples generated at ϵ=0.1. Adversarial training is then performed by augmenting the clean dataset with FGSM-generated samples, after which models are evaluated across multiple perturbation strengths (ϵ=0.1 to 3.0). Experimental findings show that boosting-based models, particularly XGBoost and LightGBM, demonstrate the highest resilience under adversarial stress. In contrast, models such as Logistic Regression and MLP experience significant performance degradation, even after adversarial training. Despite this, adversarial training substantially improves robustness across all models, highlighting its effectiveness in mitigating FGSM-induced decision boundary shifts. This benchmarking study underscores the importance of adversarial training and model selection when deploying ML-based intrusion detection systems. Boosting ensembles consistently provide superior robustness, while linear and neural models remain more susceptible to perturbations.
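FGSM itself has a simple closed form: the input is shifted by ϵ in the sign of the input-gradient of the loss. A minimal sketch on a logistic-regression stand-in follows; the model, its random parameters, and the single-feature-vector setup are illustrative assumptions, not the paper's actual classifiers — only the perturbation rule is FGSM.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic-regression model.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigma(w.x + b) - y) * w; the attack moves x by eps in the
    sign of that gradient.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def logloss(x, y, w, b):
    p = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
w = rng.normal(size=8)       # stand-in model weights
b = 0.1
x = rng.normal(size=8)       # one "traffic" feature vector
y = 1.0                      # labeled malicious
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```

For a linear model this step provably increases the loss: each coordinate moves in the direction that raises it, which is why even small ϵ can flip predictions near the decision boundary.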
Citations: 0
VEGAN: CCTV video quality enhancement with GAN-based foreground separation and super-resolution
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-01-03 DOI: 10.1016/j.array.2025.100673
Ali Asghar , Wareesa Sharif , Amna Shifa
As countries rapidly transition toward smart cities, closed-circuit television (CCTV) surveillance systems are playing an increasingly vital role in ensuring public safety and enabling urban analytics. However, the visual quality of CCTV footage is often degraded by environmental factors (e.g., motion blur, low resolution, and poor illumination), which significantly impact the quality of service, as well as the reliability and effectiveness of these systems. To address these issues, this research proposes a generative adversarial network (GAN)-based framework, named VEGAN (Video Enhancement with Generative Adversarial Network), that combines reconstruction, adversarial, and facial component losses with adaptive balancing to optimise visual sharpness, temporal stability, and identity preservation. VEGAN integrates four key components: (1) a pixel counter, which identifies high- and low-quality frames; (2) a modified MoCoGAN, which separates foreground and background features to disentangle content from motion; (3) a Recurrent Neural Network, which captures complex temporal and motion patterns; and (4) a Super-Resolution module, which enhances low-quality pixels to recover fine spatial details. The enhanced foreground generated by these combined modules is seamlessly fused with a high-quality background frame, resulting in substantially improved overall video quality. Experimental evaluations demonstrate VEGAN's effectiveness, achieving an average Learned Perceptual Image Patch Similarity (LPIPS) score of 0.041 and a Video Multimethod Assessment Fusion (VMAF) score of 56.13, indicating significant perceptual and quantitative improvements. These findings highlight VEGAN's effectiveness in video-based analytics, supporting more accurate performance in tasks such as event detection and activity recognition.
Citations: 0
Cognitive load classification during online shopping using deep learning on time series eye movement indices
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-01-03 DOI: 10.1016/j.array.2025.100669
Sunu Wibirama , Muhammad Ainul Fikri , Iman Kahfi Aliza , Kristian Adi Nugraha , Syukron Abu Ishaq Alfarozi , Noor Akhmad Setiawan , Ahmad Riznandi Suhari , Sri Kusrohmaniah
Cognitive load classification during online shopping activities is important to understand the user experience of e-commerce. Traditional classification methods that rely on proprietary software and obtrusive physiological measures often result in inconsistent performance. To address this research gap, we propose a novel approach that leverages deep learning to analyze raw eye movement data during online shopping tasks with low and high cognitive load. The Attention-based Long Short-Term Memory Fully Convolutional Network (ALSTM-FCN) model outperformed other machine learning and deep learning models with an average accuracy and F1 score of 97.70% and 97.69%, respectively. Cognitive load was also measured using the NASA TLX questionnaire, which showed significantly higher scores in high cognitive load tasks for all dimensions: “Mental Demand” (39.37, p=0.001), “Performance” (46.12, p=0.004), “Effort” (51.92, p=0.002), and “Frustration Level” (60.53, p=0.001). Based on the analysis of eye movement features used in cognitive load classification, we found that the variability in eye movement during tasks with low and high cognitive loads was predominantly spatial rather than temporal (p<0.05). Our findings indicate a strong correlation between the deep learning-based classification of raw eye movement data and subjective cognitive load assessments. This study demonstrates the potential of using an affordable eye tracking sensor to classify cognitive load without being constrained by the capability of proprietary software.
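The NASA-TLX scores reported above are per-dimension; a common way to collapse the six subscales into one workload score is the unweighted "raw TLX" mean of the 0–100 ratings. A minimal sketch follows — the dictionary layout and example ratings are illustrative assumptions, not the study's data or protocol:

```python
def raw_tlx(ratings):
    """Unweighted (raw) NASA-TLX score: the mean of the six subscale
    ratings, each given on a 0-100 scale."""
    dims = ("mental", "physical", "temporal",
            "performance", "effort", "frustration")
    missing = [d for d in dims if d not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[d] for d in dims) / len(dims)

# Hypothetical ratings for one participant on one task
example = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 60, "effort": 65, "frustration": 40}
score = raw_tlx(example)
```

The weighted variant of TLX additionally scales each subscale by pairwise-comparison weights; the raw mean shown here is the simpler aggregate often used in HCI studies.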
Citations: 0
Multi-rate real-time simulation: Techniques, models, frameworks, and challenges
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-31 DOI: 10.1016/j.array.2025.100661
Hossein Taghizad, Michel Lemaire, Daniel Massicotte
This review examines multi-rate real-time simulation (MR-RTS) techniques, models, and frameworks. Although established for years, these approaches continue to evolve to address increasingly complex systems. The paper highlights the advantages of multi-rate simulations over traditional fixed-rate methods, emphasizing their ability to adapt simulation rates to the needs of individual subsystems. It also outlines key evaluation criteria, guidance for selecting suitable frameworks, and major challenges in implementing MR-RTS, along with recommendations to overcome them.
Citations: 0
Agentic artificial intelligence is the future of cancer detection and diagnosis
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-31 DOI: 10.1016/j.array.2025.100676
Sayedur Rahman , Md. Tanzib Hosain , Nafiz Fahad , Md. Kishor Morol , Md. Jakir Hossen
Agentic artificial intelligence systems, particularly Large Language Models (LLMs) and Vision-Language Models (VLMs), mark a major shift in oncology, enabling cancer detection and diagnosis in ways not previously possible. In accordance with PRISMA 2020 criteria, we conducted a systematic search across nine databases from January 2023 to September 2025, reviewing 3986 records and incorporating 123 papers that assessed agentic AI in cancer detection and diagnosis. The research shows rapid growth (91.9% published in 2024-2025) across cancer types, with breast (22.0%) and lung cancer (13.8%) the most extensively examined. GPT-4 variants performed on par with human experts: they detected errors better than pathologists (89.5% vs. 88.5%), classified skin lesions as well as dermatologists (84.8% vs. 84.6%), and staged ovarian cancer with 97% accuracy compared to 88% for radiologists. Zero-shot LLMs consistently surpassed conventional supervised models. However, substantial problems remain, including factual errors in 15%–41% of instances, algorithmic bias, and low agreement with tumor boards (50%–70%). Agentic AI holds considerable promise for cancer detection, especially in structured tasks, but the research so far suggests it should be used as an aid rather than as an independent system; reliability concerns and algorithmic bias are among the most important impediments. Future priorities encompass Retrieval-Augmented Generation (RAG) systems, domain-specific models, and forthcoming trials to ascertain clinical value.
Citations: 0
Smart driving with AI: A review of CNN approaches to drowsiness detection
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-12-31 DOI: 10.1016/j.array.2025.100675
Riadul Islam Rabbi , Poh Ping Em , Md. Jakir Hossen
Drowsy driving is widespread and a significant cause of traffic accidents, posing a serious threat to life and property around the globe. Real-time driver drowsiness detection has therefore emerged as a primary study area, particularly given current advancements that incorporate artificial intelligence (AI) into automobiles. Convolutional Neural Networks (CNNs) have recently proven very effective at handling image data and extracting features for detecting drowsiness from facial and eye-movement patterns. This review focuses on the CNN architectures and models used for driver drowsiness detection, along with their strengths and limitations. CNN models such as VGGNet, ResNet, and Inception V3 are elaborated using pseudocode for an easy understanding of how they can be implemented in practice. The paper also examines new trends in lightweight CNNs for edge computing as a response to the demand for real-time analytics in constrained environments such as vehicles. Moreover, important issues such as data bias, model overfitting, and computational constraints are discussed, and future perspectives are provided to address these challenges, including the integration of hybrid models and the fusion of multimodal data. This review aims to provide a comprehensive understanding of CNN-based drowsiness detection and to assist in developing safe and reliable automotive applications.
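The core operation shared by all the CNN architectures mentioned above is convolution followed by a nonlinearity. A minimal NumPy sketch of that building block follows; the toy edge-detection kernel and image are illustrative only, not taken from any of the reviewed models:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most deep-learning
    frameworks): slide the kernel over the image, summing elementwise products."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# A vertical-edge kernel that responds to left-to-right intensity drops,
# e.g. the boundary of a dark region in a face crop (toy example).
edge = np.array([[1.0, 0.0, -1.0]] * 3)
img = np.zeros((5, 5))
img[:, :2] = 1.0                       # bright left half, dark right half
fmap = relu(conv2d_valid(img, edge))   # feature map lights up at the edge
```

Real CNN layers stack many such kernels (learned rather than hand-set), interleaved with pooling, which is what the VGGNet/ResNet/Inception pseudocode in the review elaborates at scale.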
Array, Volume 29, Article 100675.
Citations: 0
Trans-ADENet: Transformer-based Attention-guided Deep Ensemble Network for high-dimensional data classification
IF 4.5 · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-12-30 · DOI: 10.1016/j.array.2025.100659
Venkaiah Chowdary Bhimineni, Rajiv Senapati
High-dimensional (HD) biomedical data, such as gene expression profiles and ECG signals, pose significant challenges for machine learning (ML) due to limited sample sizes, feature redundancy, and noisy distributions. Conventional models tend to overfit, whereas boosting and ensemble approaches struggle with irrelevant features. Deep autoencoders (DAEs) perform nonlinear dimensionality reduction but miss complex dependencies, whereas transformers require large datasets to model long-range relationships through self-attention mechanisms. We propose a Transformer-based Attention-guided Deep Ensemble Network (Trans-ADENet) that integrates dimensionality reduction, attention-driven feature learning, and meta-level ensemble fusion in an end-to-end framework. A deep autoencoder compresses HD inputs into compact latent representations, which a Transformer Encoder with multi-head self-attention then refines. The refined features are fed to diverse base classifiers (CatBoost, Support Vector Machine (SVM), TabNet, and Generalized Multi-Layer Perceptron (GMLP)), and their outputs are fused by a meta-MLP that learns adaptive weights to yield robust predictions. Experiments on the breast cancer, leukemia, INCART2, and Thyroid-RNA datasets achieved 96.3%, 94.1%, 92.7%, and 94.6% accuracy, respectively, surpassing state-of-the-art models in accuracy, F1, precision, recall, and AUC. By combining representation learning, attention, and adaptive fusion, Trans-ADENet delivers accurate, interpretable classification for biomedical tasks.
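Since Trans-ADENet refines latent features through multi-head self-attention, a single-head, pure-Python sketch of scaled dot-product attention may help fix the idea; the 3×2 matrices below are hypothetical toy features, not values from the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(col) for col in zip(*m)]

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    scores = matmul(Q, transpose(K))                      # pairwise similarities
    scaled = [[s / math.sqrt(d_k) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]            # one distribution per query
    return matmul(weights, V)

# Toy latent features for three samples, d_k = 2 (hypothetical values).
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(Q, K, V)
```

Each output row is a convex combination of the value rows, weighted by how strongly that query attends to each key; a multi-head encoder runs several such maps in parallel and concatenates the results.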
Array, Volume 29, Article 100659.
Citations: 0
Federated Convolutional Neural Networks (F-CNNs) for privacy-preserving multi-class skin lesion classification
IF 4.5 · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2025-12-30 · DOI: 10.1016/j.array.2025.100667
Khadija Shahzad, Anum Khashir, Hina Tufail, Abdul Ahad, Zahra Ali, Filipe Madeira, Ivan Miguel Pires
Skin lesions comprise a variety of abnormalities found on the skin, which may be benign (not cancerous) or malignant (cancerous). Every year, the number of skin cancer cases increases globally, raising the death rate. Medical data is scarce because people are reluctant to share their health information due to privacy concerns. This research therefore focuses on federated learning, a decentralized machine learning approach that preserves patient data: models are trained independently on several dispersed devices without sharing the data. To balance and enrich the dataset, the Synthetic Minority Over-sampling Technique with Edited Nearest Neighbors (SMOTEENN) is used. The HAM10000 dataset was benchmarked using a Convolutional Neural Network (CNN); its seven classes are actinic keratosis, basal cell carcinoma, benign keratosis, dermatofibroma, melanoma, melanocytic nevi, and vascular skin lesions. A centralized method yields an accuracy of 99.39% and an f1-score, precision, and recall of 99.00%. A simulated federated learning setup with three clients, ten rounds, and thirty training epochs produced 93.00% precision, 92.00% recall, a 92.00% f1-score, and 91.80% accuracy, while an increase to four clients with ten rounds and thirty training epochs produced an accuracy, recall, precision, and f1-score of 97.00%.
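The federated simulation trains client models locally and merges them on a server. A common aggregation rule for such setups is FedAvg, a weighted average of client parameters by local dataset size; the sketch below assumes flattened weight vectors and toy client sizes, and the paper's actual aggregation scheme may differ.

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
            for i in range(n_params)]

# Hypothetical flattened weight vectors from three clients after local training.
clients = [
    [0.2, 0.4, 0.6],
    [0.4, 0.4, 0.2],
    [0.6, 0.1, 0.1],
]
sizes = [100, 100, 200]  # toy local dataset sizes; larger clients count for more
global_weights = fedavg(clients, sizes)
```

In each round, the server broadcasts `global_weights` back to the clients for the next pass of local training; no raw images ever leave a client.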
Array, Volume 29, Article 100667.
Citations: 0